Test Report: Docker_Linux_crio 21664

0ce7767ba630d3046e785243932d5087fdf03a88:2025-10-26:42076

Failed tests (40/326)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.27
35 TestAddons/parallel/Registry 13.03
36 TestAddons/parallel/RegistryCreds 0.56
37 TestAddons/parallel/Ingress 483.82
38 TestAddons/parallel/InspektorGadget 6.27
39 TestAddons/parallel/MetricsServer 5.34
41 TestAddons/parallel/CSI 369.45
42 TestAddons/parallel/Headlamp 2.69
43 TestAddons/parallel/CloudSpanner 5.29
44 TestAddons/parallel/LocalPath 8.19
45 TestAddons/parallel/NvidiaDevicePlugin 5.28
46 TestAddons/parallel/Yakd 6.26
47 TestAddons/parallel/AmdGpuDevicePlugin 5.27
90 TestFunctional/parallel/DashboardCmd 302.32
97 TestFunctional/parallel/ServiceCmdConnect 602.9
99 TestFunctional/parallel/PersistentVolumeClaim 368.94
103 TestFunctional/parallel/MySQL 602.82
119 TestFunctional/parallel/ServiceCmd/DeployApp 600.61
140 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.89
141 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.9
142 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.3
143 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
145 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
146 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.35
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
153 TestFunctional/parallel/ServiceCmd/Format 0.55
154 TestFunctional/parallel/ServiceCmd/URL 0.55
190 TestJSONOutput/pause/Command 1.6
196 TestJSONOutput/unpause/Command 1.3
260 TestPause/serial/Pause 5.38
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.4
300 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.4
312 TestStartStop/group/old-k8s-version/serial/Pause 6.81
315 TestStartStop/group/no-preload/serial/Pause 6.11
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.24
325 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.18
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.81
337 TestStartStop/group/newest-cni/serial/Pause 7.31
351 TestStartStop/group/embed-certs/serial/Pause 6.02
361 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.83
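
All durations above are in seconds. To reproduce a single failure outside CI, the usual entry point is the Go integration suite under test/integration; the sketch below is a minimal rerun of one of the failures above, assuming a minikube checkout with out/minikube-linux-amd64 already built (the exact flag set follows the repo's contributor docs, so treat this invocation as an assumption to verify there):

	# Rerun one failed test with the same driver/runtime pair as this job;
	# -run takes an anchored regex over the test names in the table above.
	go test -tags=integration ./test/integration \
	  -timeout=60m \
	  -run 'TestAddons/parallel/Ingress' \
	  -minikube-start-args='--driver=docker --container-runtime=crio'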
TestAddons/serial/Volcano (0.27s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-459729 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-459729 addons disable volcano --alsologtostderr -v=1: exit status 11 (264.670236ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 14:16:34.007256  854531 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:16:34.007409  854531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:16:34.007422  854531 out.go:374] Setting ErrFile to fd 2...
	I1026 14:16:34.007427  854531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:16:34.007621  854531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:16:34.007901  854531 mustload.go:65] Loading cluster: addons-459729
	I1026 14:16:34.008282  854531 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:34.008304  854531 addons.go:606] checking whether the cluster is paused
	I1026 14:16:34.008393  854531 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:34.008412  854531 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:16:34.008843  854531 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:16:34.028307  854531 ssh_runner.go:195] Run: systemctl --version
	I1026 14:16:34.028366  854531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:16:34.046557  854531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:16:34.147246  854531 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:16:34.147346  854531 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:16:34.178771  854531 cri.go:89] found id: "19aef1ec8510c14e849b7cefcdc09f57ad870ee7d19676222f9e11dadd8cc042"
	I1026 14:16:34.178794  854531 cri.go:89] found id: "61a5097a66804c567922e9da53afc210c2fdbb85ff910118e9760dee39f0d040"
	I1026 14:16:34.178798  854531 cri.go:89] found id: "621ed44d4d0c9c98dcc6f5d7791c964154a9fdfc066b031a81eea94bead4881f"
	I1026 14:16:34.178802  854531 cri.go:89] found id: "0957c0a36894ac64f64707cab794cc2ea3ec3052b89e5973d410bc3d470f0ccc"
	I1026 14:16:34.178807  854531 cri.go:89] found id: "3552d128c67c5f8bc101f8fec4ea4a567c8e554450e010cea9fff33e2fb35c57"
	I1026 14:16:34.178812  854531 cri.go:89] found id: "e0688bdc55e0b1428d713099dfcdead41642afc46111de5efa3f9e8fc577a82f"
	I1026 14:16:34.178816  854531 cri.go:89] found id: "0f54646dd806e6f1d2d2a55010ade3d07b7c4c78f14093b5ea24c778c704d8d9"
	I1026 14:16:34.178820  854531 cri.go:89] found id: "83682e4a110f1836b76b9ab37ae5bdb5165df03ddd6d4aab400697fb4757a66a"
	I1026 14:16:34.178824  854531 cri.go:89] found id: "ea6861a45ac70f5a40063121e871650cf8d06fbf282521746f2f1cec0f96e741"
	I1026 14:16:34.178832  854531 cri.go:89] found id: "0314c0bc382ed36965ef868e31dc0f76b6d82e34f43bf5a49c4799ecd426990c"
	I1026 14:16:34.178837  854531 cri.go:89] found id: "12266be6b9ab3bae1170a4813366b003d8d74419265ae8317f745310842b0eb6"
	I1026 14:16:34.178841  854531 cri.go:89] found id: "7c8dc6d14b139c980202322abce8e8be08218ec570fe222c54763e5032be2feb"
	I1026 14:16:34.178845  854531 cri.go:89] found id: "e712266799f113c6e29070d3b446eb814ab3d82a01e5503cf6d420bc5d9dd807"
	I1026 14:16:34.178863  854531 cri.go:89] found id: "c3bf40d60ab5e31a883ca325e0e0ec980516a554582873a5c7653558a6a05c25"
	I1026 14:16:34.178871  854531 cri.go:89] found id: "db7c2a98e81dfa3a84fa710f2fe409325e697b34c28852544eccec3493ba6c36"
	I1026 14:16:34.178875  854531 cri.go:89] found id: "9bd2912e692dc7dc8832b9f484bdfcb583e9e399f257d572d4fddb38842ac29a"
	I1026 14:16:34.178877  854531 cri.go:89] found id: "ea11dd25ee99edc9b27421bacea724bf74b1fec81e1f33251d8241d538f0bd7b"
	I1026 14:16:34.178881  854531 cri.go:89] found id: "6ec65c531ce9b20e7dfdb9cdb1623754497a4088bbed9f545ad3b0f28e423539"
	I1026 14:16:34.178883  854531 cri.go:89] found id: "4f25f66b4cedfe4a67445f7535bebe5278f7e84ec91c43ad9eee37d250277e78"
	I1026 14:16:34.178886  854531 cri.go:89] found id: "a0eba15d448bec4198d79695967a6f8e6718f30814fcdde9252cc843d58f1702"
	I1026 14:16:34.178892  854531 cri.go:89] found id: "c2b16514601ac206983ecc827f418a7f7c9779b86a8ac77a095c139429ddb09c"
	I1026 14:16:34.178894  854531 cri.go:89] found id: "102e7dda912458a4fb7c5cf795d24e3f7f8111609a7f9f3d6aa2ac793be7d8ed"
	I1026 14:16:34.178897  854531 cri.go:89] found id: "4150a83c0db93bd824ae7492cd5bbd3cd5b925dc5e29702692a93bb4ebe91e4a"
	I1026 14:16:34.178899  854531 cri.go:89] found id: "7a9a679c5c891888d2fe6da11a5021a47a92d61386bbbc79c23ddd0de01e1321"
	I1026 14:16:34.178902  854531 cri.go:89] found id: ""
	I1026 14:16:34.178944  854531 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:16:34.194477  854531 out.go:203] 
	W1026 14:16:34.195724  854531 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:16:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:16:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:16:34.195741  854531 out.go:285] * 
	* 
	W1026 14:16:34.200509  854531 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:16:34.202046  854531 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-459729 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.27s)
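Every MK_ADDON_DISABLE_PAUSED failure in this run dies before it ever touches the addon: `addons disable` first checks whether the cluster is paused by running `sudo runc list -f json` on the node, and /run/runc (runc's default state root) does not exist there, so the check itself exits non-zero. A minimal diagnostic sketch, assuming the addons-459729 profile is still up; whether this node's crio is creating containers through crun rather than runc is an assumption to verify, not something this log proves:

	# Check which OCI-runtime state roots exist on the node, then rerun
	# the exact command the paused-state check uses.
	minikube -p addons-459729 ssh -- 'ls -d /run/runc /run/crun 2>&1'
	minikube -p addons-459729 ssh -- 'sudo runc list -f json; echo exit=$?'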

TestAddons/parallel/Registry (13.03s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.859231ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-ds6k9" [14709e0b-ba9d-4eb0-b79e-a8106cba342e] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00317965s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-cs2k2" [cecd1865-e35b-4581-8aaf-358948bc244c] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003367389s
addons_test.go:392: (dbg) Run:  kubectl --context addons-459729 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-459729 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-459729 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.534681025s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-459729 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-459729 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-459729 addons disable registry --alsologtostderr -v=1: exit status 11 (260.126038ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 14:16:53.871712  857052 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:16:53.871974  857052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:16:53.871983  857052 out.go:374] Setting ErrFile to fd 2...
	I1026 14:16:53.871988  857052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:16:53.872209  857052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:16:53.872472  857052 mustload.go:65] Loading cluster: addons-459729
	I1026 14:16:53.872832  857052 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:53.872855  857052 addons.go:606] checking whether the cluster is paused
	I1026 14:16:53.872937  857052 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:53.872954  857052 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:16:53.873359  857052 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:16:53.893109  857052 ssh_runner.go:195] Run: systemctl --version
	I1026 14:16:53.893196  857052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:16:53.910460  857052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:16:54.010475  857052 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:16:54.010557  857052 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:16:54.040984  857052 cri.go:89] found id: "19aef1ec8510c14e849b7cefcdc09f57ad870ee7d19676222f9e11dadd8cc042"
	I1026 14:16:54.041042  857052 cri.go:89] found id: "61a5097a66804c567922e9da53afc210c2fdbb85ff910118e9760dee39f0d040"
	I1026 14:16:54.041049  857052 cri.go:89] found id: "621ed44d4d0c9c98dcc6f5d7791c964154a9fdfc066b031a81eea94bead4881f"
	I1026 14:16:54.041055  857052 cri.go:89] found id: "0957c0a36894ac64f64707cab794cc2ea3ec3052b89e5973d410bc3d470f0ccc"
	I1026 14:16:54.041059  857052 cri.go:89] found id: "3552d128c67c5f8bc101f8fec4ea4a567c8e554450e010cea9fff33e2fb35c57"
	I1026 14:16:54.041065  857052 cri.go:89] found id: "e0688bdc55e0b1428d713099dfcdead41642afc46111de5efa3f9e8fc577a82f"
	I1026 14:16:54.041069  857052 cri.go:89] found id: "0f54646dd806e6f1d2d2a55010ade3d07b7c4c78f14093b5ea24c778c704d8d9"
	I1026 14:16:54.041073  857052 cri.go:89] found id: "83682e4a110f1836b76b9ab37ae5bdb5165df03ddd6d4aab400697fb4757a66a"
	I1026 14:16:54.041078  857052 cri.go:89] found id: "ea6861a45ac70f5a40063121e871650cf8d06fbf282521746f2f1cec0f96e741"
	I1026 14:16:54.041099  857052 cri.go:89] found id: "0314c0bc382ed36965ef868e31dc0f76b6d82e34f43bf5a49c4799ecd426990c"
	I1026 14:16:54.041107  857052 cri.go:89] found id: "12266be6b9ab3bae1170a4813366b003d8d74419265ae8317f745310842b0eb6"
	I1026 14:16:54.041111  857052 cri.go:89] found id: "7c8dc6d14b139c980202322abce8e8be08218ec570fe222c54763e5032be2feb"
	I1026 14:16:54.041115  857052 cri.go:89] found id: "e712266799f113c6e29070d3b446eb814ab3d82a01e5503cf6d420bc5d9dd807"
	I1026 14:16:54.041119  857052 cri.go:89] found id: "c3bf40d60ab5e31a883ca325e0e0ec980516a554582873a5c7653558a6a05c25"
	I1026 14:16:54.041125  857052 cri.go:89] found id: "db7c2a98e81dfa3a84fa710f2fe409325e697b34c28852544eccec3493ba6c36"
	I1026 14:16:54.041136  857052 cri.go:89] found id: "9bd2912e692dc7dc8832b9f484bdfcb583e9e399f257d572d4fddb38842ac29a"
	I1026 14:16:54.041144  857052 cri.go:89] found id: "ea11dd25ee99edc9b27421bacea724bf74b1fec81e1f33251d8241d538f0bd7b"
	I1026 14:16:54.041150  857052 cri.go:89] found id: "6ec65c531ce9b20e7dfdb9cdb1623754497a4088bbed9f545ad3b0f28e423539"
	I1026 14:16:54.041154  857052 cri.go:89] found id: "4f25f66b4cedfe4a67445f7535bebe5278f7e84ec91c43ad9eee37d250277e78"
	I1026 14:16:54.041157  857052 cri.go:89] found id: "a0eba15d448bec4198d79695967a6f8e6718f30814fcdde9252cc843d58f1702"
	I1026 14:16:54.041194  857052 cri.go:89] found id: "c2b16514601ac206983ecc827f418a7f7c9779b86a8ac77a095c139429ddb09c"
	I1026 14:16:54.041199  857052 cri.go:89] found id: "102e7dda912458a4fb7c5cf795d24e3f7f8111609a7f9f3d6aa2ac793be7d8ed"
	I1026 14:16:54.041207  857052 cri.go:89] found id: "4150a83c0db93bd824ae7492cd5bbd3cd5b925dc5e29702692a93bb4ebe91e4a"
	I1026 14:16:54.041211  857052 cri.go:89] found id: "7a9a679c5c891888d2fe6da11a5021a47a92d61386bbbc79c23ddd0de01e1321"
	I1026 14:16:54.041216  857052 cri.go:89] found id: ""
	I1026 14:16:54.041277  857052 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:16:54.056469  857052 out.go:203] 
	W1026 14:16:54.057655  857052 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:16:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:16:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:16:54.057680  857052 out.go:285] * 
	* 
	W1026 14:16:54.062340  857052 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:16:54.063762  857052 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-459729 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.03s)

TestAddons/parallel/RegistryCreds (0.56s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.753106ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-459729
addons_test.go:332: (dbg) Run:  kubectl --context addons-459729 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-459729 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-459729 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (284.541475ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 14:16:52.256280  856730 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:16:52.256572  856730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:16:52.256582  856730 out.go:374] Setting ErrFile to fd 2...
	I1026 14:16:52.256587  856730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:16:52.256857  856730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:16:52.257224  856730 mustload.go:65] Loading cluster: addons-459729
	I1026 14:16:52.257624  856730 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:52.257645  856730 addons.go:606] checking whether the cluster is paused
	I1026 14:16:52.257739  856730 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:52.257752  856730 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:16:52.258198  856730 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:16:52.279103  856730 ssh_runner.go:195] Run: systemctl --version
	I1026 14:16:52.279195  856730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:16:52.311482  856730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:16:52.418925  856730 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:16:52.419036  856730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:16:52.451080  856730 cri.go:89] found id: "19aef1ec8510c14e849b7cefcdc09f57ad870ee7d19676222f9e11dadd8cc042"
	I1026 14:16:52.451098  856730 cri.go:89] found id: "61a5097a66804c567922e9da53afc210c2fdbb85ff910118e9760dee39f0d040"
	I1026 14:16:52.451102  856730 cri.go:89] found id: "621ed44d4d0c9c98dcc6f5d7791c964154a9fdfc066b031a81eea94bead4881f"
	I1026 14:16:52.451107  856730 cri.go:89] found id: "0957c0a36894ac64f64707cab794cc2ea3ec3052b89e5973d410bc3d470f0ccc"
	I1026 14:16:52.451110  856730 cri.go:89] found id: "3552d128c67c5f8bc101f8fec4ea4a567c8e554450e010cea9fff33e2fb35c57"
	I1026 14:16:52.451114  856730 cri.go:89] found id: "e0688bdc55e0b1428d713099dfcdead41642afc46111de5efa3f9e8fc577a82f"
	I1026 14:16:52.451116  856730 cri.go:89] found id: "0f54646dd806e6f1d2d2a55010ade3d07b7c4c78f14093b5ea24c778c704d8d9"
	I1026 14:16:52.451120  856730 cri.go:89] found id: "83682e4a110f1836b76b9ab37ae5bdb5165df03ddd6d4aab400697fb4757a66a"
	I1026 14:16:52.451124  856730 cri.go:89] found id: "ea6861a45ac70f5a40063121e871650cf8d06fbf282521746f2f1cec0f96e741"
	I1026 14:16:52.451139  856730 cri.go:89] found id: "0314c0bc382ed36965ef868e31dc0f76b6d82e34f43bf5a49c4799ecd426990c"
	I1026 14:16:52.451147  856730 cri.go:89] found id: "12266be6b9ab3bae1170a4813366b003d8d74419265ae8317f745310842b0eb6"
	I1026 14:16:52.451151  856730 cri.go:89] found id: "7c8dc6d14b139c980202322abce8e8be08218ec570fe222c54763e5032be2feb"
	I1026 14:16:52.451155  856730 cri.go:89] found id: "e712266799f113c6e29070d3b446eb814ab3d82a01e5503cf6d420bc5d9dd807"
	I1026 14:16:52.451171  856730 cri.go:89] found id: "c3bf40d60ab5e31a883ca325e0e0ec980516a554582873a5c7653558a6a05c25"
	I1026 14:16:52.451176  856730 cri.go:89] found id: "db7c2a98e81dfa3a84fa710f2fe409325e697b34c28852544eccec3493ba6c36"
	I1026 14:16:52.451196  856730 cri.go:89] found id: "9bd2912e692dc7dc8832b9f484bdfcb583e9e399f257d572d4fddb38842ac29a"
	I1026 14:16:52.451207  856730 cri.go:89] found id: "ea11dd25ee99edc9b27421bacea724bf74b1fec81e1f33251d8241d538f0bd7b"
	I1026 14:16:52.451212  856730 cri.go:89] found id: "6ec65c531ce9b20e7dfdb9cdb1623754497a4088bbed9f545ad3b0f28e423539"
	I1026 14:16:52.451215  856730 cri.go:89] found id: "4f25f66b4cedfe4a67445f7535bebe5278f7e84ec91c43ad9eee37d250277e78"
	I1026 14:16:52.451218  856730 cri.go:89] found id: "a0eba15d448bec4198d79695967a6f8e6718f30814fcdde9252cc843d58f1702"
	I1026 14:16:52.451221  856730 cri.go:89] found id: "c2b16514601ac206983ecc827f418a7f7c9779b86a8ac77a095c139429ddb09c"
	I1026 14:16:52.451223  856730 cri.go:89] found id: "102e7dda912458a4fb7c5cf795d24e3f7f8111609a7f9f3d6aa2ac793be7d8ed"
	I1026 14:16:52.451226  856730 cri.go:89] found id: "4150a83c0db93bd824ae7492cd5bbd3cd5b925dc5e29702692a93bb4ebe91e4a"
	I1026 14:16:52.451228  856730 cri.go:89] found id: "7a9a679c5c891888d2fe6da11a5021a47a92d61386bbbc79c23ddd0de01e1321"
	I1026 14:16:52.451230  856730 cri.go:89] found id: ""
	I1026 14:16:52.451269  856730 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:16:52.467565  856730 out.go:203] 
	W1026 14:16:52.468756  856730 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:16:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:16:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:16:52.468783  856730 out.go:285] * 
	* 
	W1026 14:16:52.473722  856730 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:16:52.475362  856730 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-459729 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.56s)
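The stderr above is byte-for-byte the same failure as Volcano and Registry: the registry-creds steps themselves pass (both the configure run and the secret listing succeed), and only the shared paused-state check kills the disable call. To see how much of this run that single check accounts for, a grep over a locally saved copy of the report is enough (the filename here is hypothetical):

	# Count failures attributable to the shared paused-state check.
	grep -c 'MK_ADDON_DISABLE_PAUSED' Docker_Linux_crio_21664.log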

TestAddons/parallel/Ingress (483.82s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-459729 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-459729 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-459729 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [d99505c9-bb9c-4c52-90e0-9ab7033b32bf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-459729 -n addons-459729
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-10-26 14:24:52.540692024 +0000 UTC m=+646.096089412
addons_test.go:252: (dbg) Run:  kubectl --context addons-459729 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-459729 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-459729/192.168.49.2
Start Time:       Sun, 26 Oct 2025 14:16:52 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.28
IPs:
  IP:  10.244.0.28
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwdp7 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-xwdp7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  8m                     default-scheduler  Successfully assigned default/nginx to addons-459729
  Warning  Failed     3m10s (x2 over 6m59s)  kubelet            Failed to pull image "docker.io/nginx:alpine": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     3m10s (x2 over 6m59s)  kubelet            Error: ErrImagePull
  Normal   BackOff    2m55s (x2 over 6m59s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     2m55s (x2 over 6m59s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    2m41s (x3 over 8m)     kubelet            Pulling image "docker.io/nginx:alpine"
addons_test.go:252: (dbg) Run:  kubectl --context addons-459729 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-459729 logs nginx -n default: exit status 1 (76.216415ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-459729 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
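Unlike the disable failures above, this one is an external dependency rather than the addon: every attempt to pull docker.io/nginx:alpine hit Docker Hub's unauthenticated rate limit (the toomanyrequests errors in the events), so the pod sat in ImagePullBackOff until the 8m0s wait expired. A hedged workaround sketch for reruns on this builder, assuming the host's Docker daemon can still pull the image or already has it cached:

	# Pull on the host, then copy the image into the minikube node so the
	# kubelet never needs to contact Docker Hub for it.
	docker pull docker.io/nginx:alpine
	minikube -p addons-459729 image load docker.io/nginx:alpine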
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-459729
helpers_test.go:243: (dbg) docker inspect addons-459729:

-- stdout --
	[
	    {
	        "Id": "fc6e75fab9c5724b831e93e0ad2a93d91d49dd1e164485d8b27b314fbc5e0b99",
	        "Created": "2025-10-26T14:14:40.52606534Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 847075,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T14:14:40.558709556Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/fc6e75fab9c5724b831e93e0ad2a93d91d49dd1e164485d8b27b314fbc5e0b99/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc6e75fab9c5724b831e93e0ad2a93d91d49dd1e164485d8b27b314fbc5e0b99/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc6e75fab9c5724b831e93e0ad2a93d91d49dd1e164485d8b27b314fbc5e0b99/hosts",
	        "LogPath": "/var/lib/docker/containers/fc6e75fab9c5724b831e93e0ad2a93d91d49dd1e164485d8b27b314fbc5e0b99/fc6e75fab9c5724b831e93e0ad2a93d91d49dd1e164485d8b27b314fbc5e0b99-json.log",
	        "Name": "/addons-459729",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-459729:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-459729",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc6e75fab9c5724b831e93e0ad2a93d91d49dd1e164485d8b27b314fbc5e0b99",
	                "LowerDir": "/var/lib/docker/overlay2/be283a9f8cd9ccd9baac09b427be1213a6b5c9cded6ad57cc7c2dd84f70df753-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/be283a9f8cd9ccd9baac09b427be1213a6b5c9cded6ad57cc7c2dd84f70df753/merged",
	                "UpperDir": "/var/lib/docker/overlay2/be283a9f8cd9ccd9baac09b427be1213a6b5c9cded6ad57cc7c2dd84f70df753/diff",
	                "WorkDir": "/var/lib/docker/overlay2/be283a9f8cd9ccd9baac09b427be1213a6b5c9cded6ad57cc7c2dd84f70df753/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-459729",
	                "Source": "/var/lib/docker/volumes/addons-459729/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-459729",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-459729",
	                "name.minikube.sigs.k8s.io": "addons-459729",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "27cac62847effb19906009c5979fe40bbf685a449ce5b4deb39ded6dddff8b6f",
	            "SandboxKey": "/var/run/docker/netns/27cac62847ef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33536"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33537"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33540"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33538"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33539"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-459729": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:b4:86:17:1e:a3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "35dc3def6cc813d1d5c906424df9f8355bd88f05b16bb1826e9958e3c782a1a4",
	                    "EndpointID": "3162d9d223ad2c1fef671da2ec9c0200d2ce47e2eeda4daaba75d1967d709ae6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-459729",
	                        "fc6e75fab9c5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-459729 -n addons-459729
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-459729 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-459729 logs -n 25: (1.204437618s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p download-docker-939440 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-939440 │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ delete  │ -p download-docker-939440                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-939440 │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ start   │ --download-only -p binary-mirror-114305 --alsologtostderr --binary-mirror http://127.0.0.1:44689 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-114305   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ delete  │ -p binary-mirror-114305                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-114305   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ addons  │ enable dashboard -p addons-459729                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ addons  │ disable dashboard -p addons-459729                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ start   │ -p addons-459729 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:16 UTC │
	│ addons  │ addons-459729 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ addons  │ addons-459729 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ addons  │ enable headlamp -p addons-459729 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ addons  │ addons-459729 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ addons  │ addons-459729 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ addons  │ addons-459729 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ ssh     │ addons-459729 ssh cat /opt/local-path-provisioner/pvc-618f90bd-473d-4ea6-99a0-92fd8df748d0_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │ 26 Oct 25 14:16 UTC │
	│ addons  │ addons-459729 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ addons  │ addons-459729 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-459729                                                                                                                                                                                                                                                                                                                                                                                           │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │ 26 Oct 25 14:16 UTC │
	│ addons  │ addons-459729 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ addons  │ addons-459729 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ ip      │ addons-459729 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │ 26 Oct 25 14:16 UTC │
	│ addons  │ addons-459729 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ addons  │ addons-459729 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ addons  │ addons-459729 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:17 UTC │                     │
	│ addons  │ addons-459729 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:23 UTC │                     │
	│ addons  │ addons-459729 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:23 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
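
For reference, the start invocation recorded in the first row of the table above unpacks to the following single command (all flags copied from that row, grouped here for readability):

    out/minikube-linux-amd64 start -p addons-459729 --wait=true --memory=4096 \
      --alsologtostderr --driver=docker --container-runtime=crio \
      --addons=registry --addons=registry-creds --addons=metrics-server \
      --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth \
      --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin \
      --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin \
      --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
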
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 14:14:17
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 14:14:17.112515  846424 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:14:17.112795  846424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:14:17.112803  846424 out.go:374] Setting ErrFile to fd 2...
	I1026 14:14:17.112807  846424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:14:17.112990  846424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:14:17.113534  846424 out.go:368] Setting JSON to false
	I1026 14:14:17.114463  846424 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7005,"bootTime":1761481052,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 14:14:17.114570  846424 start.go:141] virtualization: kvm guest
	I1026 14:14:17.116382  846424 out.go:179] * [addons-459729] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 14:14:17.117587  846424 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 14:14:17.117592  846424 notify.go:220] Checking for updates...
	I1026 14:14:17.118732  846424 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:14:17.119875  846424 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 14:14:17.121054  846424 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 14:14:17.122198  846424 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 14:14:17.123215  846424 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 14:14:17.124682  846424 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:14:17.149310  846424 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 14:14:17.149487  846424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:14:17.207621  846424 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-26 14:14:17.197494844 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 14:14:17.207741  846424 docker.go:318] overlay module found
	I1026 14:14:17.209500  846424 out.go:179] * Using the docker driver based on user configuration
	I1026 14:14:17.210611  846424 start.go:305] selected driver: docker
	I1026 14:14:17.210627  846424 start.go:925] validating driver "docker" against <nil>
	I1026 14:14:17.210642  846424 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 14:14:17.211282  846424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:14:17.265537  846424 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-26 14:14:17.255623393 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 14:14:17.265767  846424 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 14:14:17.266017  846424 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 14:14:17.268242  846424 out.go:179] * Using Docker driver with root privileges
	I1026 14:14:17.269488  846424 cni.go:84] Creating CNI manager for ""
	I1026 14:14:17.269559  846424 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 14:14:17.269572  846424 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 14:14:17.269643  846424 start.go:349] cluster config:
	{Name:addons-459729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-459729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:14:17.270969  846424 out.go:179] * Starting "addons-459729" primary control-plane node in "addons-459729" cluster
	I1026 14:14:17.272134  846424 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 14:14:17.273402  846424 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 14:14:17.274551  846424 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 14:14:17.274581  846424 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 14:14:17.274602  846424 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 14:14:17.274611  846424 cache.go:58] Caching tarball of preloaded images
	I1026 14:14:17.274710  846424 preload.go:233] Found /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 14:14:17.274721  846424 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 14:14:17.275086  846424 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/config.json ...
	I1026 14:14:17.275112  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/config.json: {Name:mk9529b624fed8d03806b178f8e915dee8aa0e87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:17.292287  846424 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1026 14:14:17.292466  846424 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1026 14:14:17.292494  846424 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1026 14:14:17.292500  846424 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1026 14:14:17.292513  846424 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1026 14:14:17.292520  846424 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1026 14:14:29.432150  846424 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1026 14:14:29.432207  846424 cache.go:232] Successfully downloaded all kic artifacts
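
To confirm the kicbase image the cache loader just produced is visible to the local daemon, a quick check (image name and digest copied from the log lines above):

    docker images --digests gcr.io/k8s-minikube/kicbase-builds
    # expect tag v0.0.48-1760939008-21773 with digest
    # sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
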
	I1026 14:14:29.432255  846424 start.go:360] acquireMachinesLock for addons-459729: {Name:mk6d98d5da8e9c6ee516b00ba1c75ff50ea84eb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 14:14:29.432358  846424 start.go:364] duration metric: took 82.777µs to acquireMachinesLock for "addons-459729"
	I1026 14:14:29.432384  846424 start.go:93] Provisioning new machine with config: &{Name:addons-459729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-459729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 14:14:29.432464  846424 start.go:125] createHost starting for "" (driver="docker")
	I1026 14:14:29.434070  846424 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1026 14:14:29.434326  846424 start.go:159] libmachine.API.Create for "addons-459729" (driver="docker")
	I1026 14:14:29.434382  846424 client.go:168] LocalClient.Create starting
	I1026 14:14:29.434474  846424 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem
	I1026 14:14:29.636359  846424 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem
	I1026 14:14:29.991463  846424 cli_runner.go:164] Run: docker network inspect addons-459729 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 14:14:30.008472  846424 cli_runner.go:211] docker network inspect addons-459729 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 14:14:30.008584  846424 network_create.go:284] running [docker network inspect addons-459729] to gather additional debugging logs...
	I1026 14:14:30.008611  846424 cli_runner.go:164] Run: docker network inspect addons-459729
	W1026 14:14:30.026519  846424 cli_runner.go:211] docker network inspect addons-459729 returned with exit code 1
	I1026 14:14:30.026548  846424 network_create.go:287] error running [docker network inspect addons-459729]: docker network inspect addons-459729: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-459729 not found
	I1026 14:14:30.026559  846424 network_create.go:289] output of [docker network inspect addons-459729]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-459729 not found
	
	** /stderr **
	I1026 14:14:30.026678  846424 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 14:14:30.043803  846424 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021485f0}
	I1026 14:14:30.043866  846424 network_create.go:124] attempt to create docker network addons-459729 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1026 14:14:30.043913  846424 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-459729 addons-459729
	I1026 14:14:30.100466  846424 network_create.go:108] docker network addons-459729 192.168.49.0/24 created
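
The freshly created network can be sanity-checked with the same inspect call minikube issues, trimmed to the fields that matter (network name and expected values taken from the lines above):

    docker network inspect addons-459729 \
      --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
    # expect: 192.168.49.0/24 via 192.168.49.1
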
	I1026 14:14:30.100509  846424 kic.go:121] calculated static IP "192.168.49.2" for the "addons-459729" container
	I1026 14:14:30.100583  846424 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 14:14:30.116904  846424 cli_runner.go:164] Run: docker volume create addons-459729 --label name.minikube.sigs.k8s.io=addons-459729 --label created_by.minikube.sigs.k8s.io=true
	I1026 14:14:30.135222  846424 oci.go:103] Successfully created a docker volume addons-459729
	I1026 14:14:30.135299  846424 cli_runner.go:164] Run: docker run --rm --name addons-459729-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-459729 --entrypoint /usr/bin/test -v addons-459729:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 14:14:36.146492  846424 cli_runner.go:217] Completed: docker run --rm --name addons-459729-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-459729 --entrypoint /usr/bin/test -v addons-459729:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (6.011135666s)
	I1026 14:14:36.146530  846424 oci.go:107] Successfully prepared a docker volume addons-459729
	I1026 14:14:36.146583  846424 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 14:14:36.146616  846424 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 14:14:36.146686  846424 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-459729:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 14:14:40.450984  846424 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-459729:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.304224683s)
	I1026 14:14:40.451018  846424 kic.go:203] duration metric: took 4.304399454s to extract preloaded images to volume ...
	W1026 14:14:40.451121  846424 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 14:14:40.451155  846424 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 14:14:40.451213  846424 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 14:14:40.510278  846424 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-459729 --name addons-459729 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-459729 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-459729 --network addons-459729 --ip 192.168.49.2 --volume addons-459729:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 14:14:40.765991  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Running}}
	I1026 14:14:40.784464  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:14:40.802012  846424 cli_runner.go:164] Run: docker exec addons-459729 stat /var/lib/dpkg/alternatives/iptables
	I1026 14:14:40.851940  846424 oci.go:144] the created container "addons-459729" has a running status.
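
The later SSH dials to 127.0.0.1:33536 work because of the --publish=127.0.0.1::22 mapping in the docker run above; the host-side port Docker picked can be read back at any time with:

    docker port addons-459729 22/tcp
    # e.g. 127.0.0.1:33536
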
	I1026 14:14:40.851973  846424 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa...
	I1026 14:14:40.949694  846424 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 14:14:40.978174  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:14:41.000243  846424 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 14:14:41.000276  846424 kic_runner.go:114] Args: [docker exec --privileged addons-459729 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 14:14:41.043571  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:14:41.069582  846424 machine.go:93] provisionDockerMachine start ...
	I1026 14:14:41.069796  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:41.093554  846424 main.go:141] libmachine: Using SSH client type: native
	I1026 14:14:41.093778  846424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33536 <nil> <nil>}
	I1026 14:14:41.093791  846424 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 14:14:41.243331  846424 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-459729
	
	I1026 14:14:41.243363  846424 ubuntu.go:182] provisioning hostname "addons-459729"
	I1026 14:14:41.243419  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:41.261776  846424 main.go:141] libmachine: Using SSH client type: native
	I1026 14:14:41.262051  846424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33536 <nil> <nil>}
	I1026 14:14:41.262072  846424 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-459729 && echo "addons-459729" | sudo tee /etc/hostname
	I1026 14:14:41.414391  846424 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-459729
	
	I1026 14:14:41.414497  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:41.433449  846424 main.go:141] libmachine: Using SSH client type: native
	I1026 14:14:41.433812  846424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33536 <nil> <nil>}
	I1026 14:14:41.433851  846424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-459729' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-459729/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-459729' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 14:14:41.575368  846424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 14:14:41.575416  846424 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-841519/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-841519/.minikube}
	I1026 14:14:41.575444  846424 ubuntu.go:190] setting up certificates
	I1026 14:14:41.575464  846424 provision.go:84] configureAuth start
	I1026 14:14:41.575530  846424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-459729
	I1026 14:14:41.593069  846424 provision.go:143] copyHostCerts
	I1026 14:14:41.593211  846424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem (1123 bytes)
	I1026 14:14:41.593370  846424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem (1675 bytes)
	I1026 14:14:41.593473  846424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem (1082 bytes)
	I1026 14:14:41.593572  846424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem org=jenkins.addons-459729 san=[127.0.0.1 192.168.49.2 addons-459729 localhost minikube]
	I1026 14:14:41.952749  846424 provision.go:177] copyRemoteCerts
	I1026 14:14:41.952809  846424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 14:14:41.952864  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:41.971059  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:14:42.071814  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 14:14:42.091550  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 14:14:42.109573  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 14:14:42.127661  846424 provision.go:87] duration metric: took 552.178827ms to configureAuth
	I1026 14:14:42.127694  846424 ubuntu.go:206] setting minikube options for container-runtime
	I1026 14:14:42.127910  846424 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:14:42.128035  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:42.145755  846424 main.go:141] libmachine: Using SSH client type: native
	I1026 14:14:42.145991  846424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33536 <nil> <nil>}
	I1026 14:14:42.146015  846424 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 14:14:42.398484  846424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
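If the sysconfig write ever needs verifying after the crio restart, a minimal check over the same channel (using minikube's ssh wrapper rather than a raw SSH dial):

    out/minikube-linux-amd64 -p addons-459729 ssh -- cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
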
	I1026 14:14:42.398510  846424 machine.go:96] duration metric: took 1.328895029s to provisionDockerMachine
	I1026 14:14:42.398521  846424 client.go:171] duration metric: took 12.964130689s to LocalClient.Create
	I1026 14:14:42.398541  846424 start.go:167] duration metric: took 12.964216103s to libmachine.API.Create "addons-459729"
	I1026 14:14:42.398551  846424 start.go:293] postStartSetup for "addons-459729" (driver="docker")
	I1026 14:14:42.398565  846424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 14:14:42.398618  846424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 14:14:42.398665  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:42.416371  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:14:42.518463  846424 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 14:14:42.521931  846424 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 14:14:42.521963  846424 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 14:14:42.521977  846424 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/addons for local assets ...
	I1026 14:14:42.522046  846424 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/files for local assets ...
	I1026 14:14:42.522073  846424 start.go:296] duration metric: took 123.514687ms for postStartSetup
	I1026 14:14:42.522380  846424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-459729
	I1026 14:14:42.540283  846424 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/config.json ...
	I1026 14:14:42.540575  846424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 14:14:42.540629  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:42.558249  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:14:42.655957  846424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 14:14:42.660462  846424 start.go:128] duration metric: took 13.22797972s to createHost
	I1026 14:14:42.660486  846424 start.go:83] releasing machines lock for "addons-459729", held for 13.228116528s
	I1026 14:14:42.660551  846424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-459729
	I1026 14:14:42.677972  846424 ssh_runner.go:195] Run: cat /version.json
	I1026 14:14:42.678042  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:42.678103  846424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 14:14:42.678186  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:42.696981  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:14:42.697266  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:14:42.856351  846424 ssh_runner.go:195] Run: systemctl --version
	I1026 14:14:42.863288  846424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 14:14:42.900301  846424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 14:14:42.905120  846424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 14:14:42.905196  846424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 14:14:42.932600  846424 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 14:14:42.932623  846424 start.go:495] detecting cgroup driver to use...
	I1026 14:14:42.932656  846424 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 14:14:42.932705  846424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 14:14:42.948987  846424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 14:14:42.961218  846424 docker.go:218] disabling cri-docker service (if available) ...
	I1026 14:14:42.961271  846424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 14:14:42.977976  846424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 14:14:42.995853  846424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 14:14:43.078675  846424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 14:14:43.167078  846424 docker.go:234] disabling docker service ...
	I1026 14:14:43.167150  846424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 14:14:43.186433  846424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 14:14:43.199219  846424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 14:14:43.281310  846424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 14:14:43.363611  846424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 14:14:43.376627  846424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 14:14:43.391082  846424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 14:14:43.391147  846424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:14:43.401654  846424 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 14:14:43.401722  846424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:14:43.411314  846424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:14:43.420752  846424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:14:43.430053  846424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 14:14:43.438422  846424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:14:43.447584  846424 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:14:43.462065  846424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:14:43.471427  846424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 14:14:43.478920  846424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 14:14:43.486416  846424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 14:14:43.566863  846424 ssh_runner.go:195] Run: sudo systemctl restart crio
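
The run of sed edits above converges on a small set of overrides in the cri-o drop-in; after the restart, the net effect can be confirmed in one grep (file path and expected values taken from the commands above):

    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
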
	I1026 14:14:43.671842  846424 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 14:14:43.671918  846424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 14:14:43.675998  846424 start.go:563] Will wait 60s for crictl version
	I1026 14:14:43.676061  846424 ssh_runner.go:195] Run: which crictl
	I1026 14:14:43.679709  846424 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 14:14:43.706317  846424 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 14:14:43.706420  846424 ssh_runner.go:195] Run: crio --version
	I1026 14:14:43.734316  846424 ssh_runner.go:195] Run: crio --version
	I1026 14:14:43.764384  846424 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 14:14:43.765785  846424 cli_runner.go:164] Run: docker network inspect addons-459729 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 14:14:43.783001  846424 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 14:14:43.787207  846424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
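
The /etc/hosts rewrite above uses a filter-then-append idiom worth spelling out: the $'\t' in the grep pattern anchors the match to minikube's own tab-separated entries, and the final cp (rather than mv) keeps the ownership and mode of the existing file. The same update, written out with printf for the literal tab:

    { grep -v $'\thost.minikube.internal$' /etc/hosts;   # drop any stale entry
      printf '192.168.49.1\thost.minikube.internal\n'    # append the fresh one
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts                         # cp keeps the target's owner and mode
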
	I1026 14:14:43.797548  846424 kubeadm.go:883] updating cluster {Name:addons-459729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-459729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 14:14:43.797721  846424 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 14:14:43.797793  846424 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 14:14:43.832123  846424 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 14:14:43.832145  846424 crio.go:433] Images already preloaded, skipping extraction
	I1026 14:14:43.832214  846424 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 14:14:43.858842  846424 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 14:14:43.858871  846424 cache_images.go:85] Images are preloaded, skipping loading
	I1026 14:14:43.858883  846424 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1026 14:14:43.859030  846424 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-459729 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-459729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
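
To see the kubelet unit as systemd finally renders it on the node, once the drop-in below has been scp'd into place, systemctl cat shows the unit file plus its drop-ins in one view:

    out/minikube-linux-amd64 -p addons-459729 ssh -- sudo systemctl cat kubelet
    # prints /lib/systemd/system/kubelet.service followed by
    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
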
	I1026 14:14:43.859110  846424 ssh_runner.go:195] Run: crio config
	I1026 14:14:43.904710  846424 cni.go:84] Creating CNI manager for ""
	I1026 14:14:43.904736  846424 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 14:14:43.904762  846424 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 14:14:43.904789  846424 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-459729 NodeName:addons-459729 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 14:14:43.904928  846424 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-459729"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 14:14:43.904991  846424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 14:14:43.913572  846424 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 14:14:43.913638  846424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 14:14:43.921876  846424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1026 14:14:43.934931  846424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 14:14:43.950730  846424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
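The 2209-byte file staged here is the kubeadm config rendered above: four kubeadm API documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) stacked with "---" separators. One way to sanity-check such a file by hand, assuming kubeadm v1.26 or newer where the validate subcommand exists:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new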
	I1026 14:14:43.963901  846424 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1026 14:14:43.967671  846424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
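The /etc/hosts rewrite above is an idempotent pin of the control-plane name: grep -v drops any stale control-plane.minikube.internal entry, echo re-adds it with the current IP, and the result is staged in a temp file and installed with sudo cp (a plain "sudo ... > /etc/hosts" would not work, since the redirection runs in the unprivileged shell). The same pattern with placeholder NAME and IP:

	{ grep -v $'\tNAME$' /etc/hosts; echo "IP	NAME"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts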
	I1026 14:14:43.977851  846424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 14:14:44.058772  846424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 14:14:44.083941  846424 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729 for IP: 192.168.49.2
	I1026 14:14:44.083989  846424 certs.go:195] generating shared ca certs ...
	I1026 14:14:44.084018  846424 certs.go:227] acquiring lock for ca certs: {Name:mkc310765b5f037cf348f6c57ba521193a825757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:44.084226  846424 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key
	I1026 14:14:44.387912  846424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt ...
	I1026 14:14:44.387946  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt: {Name:mk8933e3107ac3223c09abfcc2b23b2a267f80dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:44.388133  846424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key ...
	I1026 14:14:44.388149  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key: {Name:mk6b1973d9c275e0f32b5e6221cf09f2bcd1d12d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:44.388250  846424 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key
	I1026 14:14:45.246605  846424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt ...
	I1026 14:14:45.246640  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt: {Name:mkdb300b113fc66de4a4109eb2097856fa215e63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.246821  846424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key ...
	I1026 14:14:45.246832  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key: {Name:mkaba3ad2bc7a1a50d30bd9bfd3aea7c19e5fda9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.246922  846424 certs.go:257] generating profile certs ...
	I1026 14:14:45.247013  846424 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.key
	I1026 14:14:45.247033  846424 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt with IP's: []
	I1026 14:14:45.334595  846424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt ...
	I1026 14:14:45.334626  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: {Name:mkafadf8981207eceb9ebbe4962ff018f519fecb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.334804  846424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.key ...
	I1026 14:14:45.334815  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.key: {Name:mka2fbae2418418d747b82adac0fb2b7f375ffa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.334888  846424 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.key.e8921df1
	I1026 14:14:45.334908  846424 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.crt.e8921df1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1026 14:14:45.666093  846424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.crt.e8921df1 ...
	I1026 14:14:45.666125  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.crt.e8921df1: {Name:mkb948c94234f3b4bc97a7b01df3ae78190037f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.666319  846424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.key.e8921df1 ...
	I1026 14:14:45.666337  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.key.e8921df1: {Name:mk3bf95757956aa10cef36d1b4e59b884575ea91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.666413  846424 certs.go:382] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.crt.e8921df1 -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.crt
	I1026 14:14:45.666512  846424 certs.go:386] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.key.e8921df1 -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.key
	I1026 14:14:45.666569  846424 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.key
	I1026 14:14:45.666596  846424 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.crt with IP's: []
	I1026 14:14:45.921156  846424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.crt ...
	I1026 14:14:45.921205  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.crt: {Name:mkbc119a7d5f48960c3f21d5f4d887a967005987 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.921387  846424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.key ...
	I1026 14:14:45.921401  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.key: {Name:mk005d3953795c30c971b42e066689f23e94bbc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.921650  846424 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 14:14:45.921691  846424 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem (1082 bytes)
	I1026 14:14:45.921717  846424 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem (1123 bytes)
	I1026 14:14:45.921738  846424 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem (1675 bytes)
	I1026 14:14:45.922419  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 14:14:45.941068  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 14:14:45.958551  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 14:14:45.976346  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 14:14:45.994052  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 14:14:46.011477  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 14:14:46.028955  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 14:14:46.046187  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 14:14:46.063408  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 14:14:46.082572  846424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 14:14:46.095043  846424 ssh_runner.go:195] Run: openssl version
	I1026 14:14:46.101206  846424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 14:14:46.112299  846424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 14:14:46.116268  846424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:14 /usr/share/ca-certificates/minikubeCA.pem
	I1026 14:14:46.116319  846424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 14:14:46.152435  846424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
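The openssl/ln pair above wires the CA into OpenSSL's hashed trust directory: "openssl x509 -hash -noout" prints the certificate's subject-name hash (b5213941 here), and OpenSSL locates CAs in /etc/ssl/certs through symlinks named <hash>.0. The same two steps written out generically:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"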
	I1026 14:14:46.161706  846424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 14:14:46.165576  846424 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 14:14:46.165626  846424 kubeadm.go:400] StartCluster: {Name:addons-459729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-459729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:14:46.165713  846424 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:14:46.165765  846424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:14:46.194501  846424 cri.go:89] found id: ""
	I1026 14:14:46.194576  846424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 14:14:46.202715  846424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 14:14:46.211023  846424 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 14:14:46.211084  846424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 14:14:46.219223  846424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 14:14:46.219242  846424 kubeadm.go:157] found existing configuration files:
	
	I1026 14:14:46.219304  846424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 14:14:46.227401  846424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 14:14:46.227464  846424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 14:14:46.234983  846424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 14:14:46.242551  846424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 14:14:46.242605  846424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 14:14:46.249969  846424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 14:14:46.257567  846424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 14:14:46.257615  846424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 14:14:46.265426  846424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 14:14:46.273171  846424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 14:14:46.273236  846424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 14:14:46.280562  846424 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 14:14:46.343303  846424 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1026 14:14:46.403244  846424 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
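The two WARNING lines above are preflight findings that kubeadm reports but does not fail on, partly because the --ignore-preflight-errors list in the init command covers checks that routinely fail inside a container (Swap, NumCPU, Mem, SystemVerification, the bridge-nf-call-iptables sysctl, and the port/manifest availability checks). To preview the full preflight result for a config without initializing anything, one option (assuming kubeadm's phase subcommands) is:

	sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml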
	I1026 14:14:56.860323  846424 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 14:14:56.860407  846424 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 14:14:56.860530  846424 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 14:14:56.860618  846424 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 14:14:56.860662  846424 kubeadm.go:318] OS: Linux
	I1026 14:14:56.860706  846424 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 14:14:56.860748  846424 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 14:14:56.860797  846424 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 14:14:56.860866  846424 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 14:14:56.860933  846424 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 14:14:56.861010  846424 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 14:14:56.861057  846424 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 14:14:56.861095  846424 kubeadm.go:318] CGROUPS_IO: enabled
	I1026 14:14:56.861201  846424 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 14:14:56.861325  846424 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 14:14:56.861408  846424 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 14:14:56.861499  846424 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 14:14:56.863767  846424 out.go:252]   - Generating certificates and keys ...
	I1026 14:14:56.863843  846424 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 14:14:56.863905  846424 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 14:14:56.863967  846424 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 14:14:56.864073  846424 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 14:14:56.864145  846424 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 14:14:56.864216  846424 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 14:14:56.864284  846424 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 14:14:56.864408  846424 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-459729 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 14:14:56.864455  846424 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 14:14:56.864552  846424 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-459729 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 14:14:56.864612  846424 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 14:14:56.864666  846424 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 14:14:56.864721  846424 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 14:14:56.864809  846424 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 14:14:56.864880  846424 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 14:14:56.864955  846424 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 14:14:56.865011  846424 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 14:14:56.865071  846424 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 14:14:56.865154  846424 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 14:14:56.865256  846424 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 14:14:56.865342  846424 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 14:14:56.866657  846424 out.go:252]   - Booting up control plane ...
	I1026 14:14:56.866747  846424 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 14:14:56.866847  846424 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 14:14:56.866934  846424 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 14:14:56.867095  846424 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 14:14:56.867202  846424 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 14:14:56.867333  846424 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 14:14:56.867446  846424 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 14:14:56.867518  846424 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 14:14:56.867705  846424 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 14:14:56.867847  846424 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 14:14:56.867935  846424 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001058368s
	I1026 14:14:56.868063  846424 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 14:14:56.868199  846424 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1026 14:14:56.868310  846424 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 14:14:56.868408  846424 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 14:14:56.868533  846424 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.557698122s
	I1026 14:14:56.868636  846424 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.268793474s
	I1026 14:14:56.868740  846424 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001941586s
	I1026 14:14:56.868848  846424 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 14:14:56.868985  846424 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 14:14:56.869074  846424 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 14:14:56.869319  846424 kubeadm.go:318] [mark-control-plane] Marking the node addons-459729 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 14:14:56.869423  846424 kubeadm.go:318] [bootstrap-token] Using token: f6fn21.ali5nckn8rkh7x29
	I1026 14:14:56.871880  846424 out.go:252]   - Configuring RBAC rules ...
	I1026 14:14:56.871970  846424 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 14:14:56.872081  846424 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 14:14:56.872291  846424 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 14:14:56.872503  846424 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 14:14:56.872682  846424 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 14:14:56.872826  846424 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 14:14:56.872987  846424 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 14:14:56.873058  846424 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 14:14:56.873120  846424 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 14:14:56.873133  846424 kubeadm.go:318] 
	I1026 14:14:56.873228  846424 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 14:14:56.873240  846424 kubeadm.go:318] 
	I1026 14:14:56.873354  846424 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 14:14:56.873363  846424 kubeadm.go:318] 
	I1026 14:14:56.873405  846424 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 14:14:56.873458  846424 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 14:14:56.873503  846424 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 14:14:56.873509  846424 kubeadm.go:318] 
	I1026 14:14:56.873555  846424 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 14:14:56.873560  846424 kubeadm.go:318] 
	I1026 14:14:56.873597  846424 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 14:14:56.873603  846424 kubeadm.go:318] 
	I1026 14:14:56.873643  846424 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 14:14:56.873707  846424 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 14:14:56.873765  846424 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 14:14:56.873770  846424 kubeadm.go:318] 
	I1026 14:14:56.873885  846424 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 14:14:56.873950  846424 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 14:14:56.873955  846424 kubeadm.go:318] 
	I1026 14:14:56.874020  846424 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token f6fn21.ali5nckn8rkh7x29 \
	I1026 14:14:56.874104  846424 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:17405a11f9ced5253329d88582717a258ab19676719f7fb1d52a2fb8fc3ffa0b \
	I1026 14:14:56.874125  846424 kubeadm.go:318] 	--control-plane 
	I1026 14:14:56.874131  846424 kubeadm.go:318] 
	I1026 14:14:56.874231  846424 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 14:14:56.874245  846424 kubeadm.go:318] 
	I1026 14:14:56.874359  846424 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token f6fn21.ali5nckn8rkh7x29 \
	I1026 14:14:56.874513  846424 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:17405a11f9ced5253329d88582717a258ab19676719f7fb1d52a2fb8fc3ffa0b 
	I1026 14:14:56.874526  846424 cni.go:84] Creating CNI manager for ""
	I1026 14:14:56.874533  846424 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 14:14:56.876103  846424 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 14:14:56.877647  846424 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 14:14:56.882227  846424 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 14:14:56.882247  846424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 14:14:56.895793  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 14:14:57.106713  846424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 14:14:57.106824  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:14:57.106854  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-459729 minikube.k8s.io/updated_at=2025_10_26T14_14_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=addons-459729 minikube.k8s.io/primary=true
	I1026 14:14:57.117887  846424 ops.go:34] apiserver oom_adj: -16
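The oom_adj probe confirms the kubelet protected the API server from the OOM killer: -16 on the legacy /proc/<pid>/oom_adj scale (-17..15). Modern kernels expose the same setting on the finer /proc/<pid>/oom_score_adj scale (-1000..1000), so an equivalent manual check is either of:

	cat /proc/"$(pgrep kube-apiserver)"/oom_adj        # legacy scale, what is read above
	cat /proc/"$(pgrep kube-apiserver)"/oom_score_adj  # modern scale, same process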
	I1026 14:14:57.187931  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:14:57.688917  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:14:58.188959  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:14:58.688895  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:14:59.188658  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:14:59.688052  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:00.188849  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:00.687985  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:01.188637  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:01.688698  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:01.754745  846424 kubeadm.go:1113] duration metric: took 4.647991318s to wait for elevateKubeSystemPrivileges
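The burst of "kubectl get sa default" runs above is a readiness poll: right after kubeadm init the controller-manager has not yet created the "default" ServiceAccount, so minikube retries the lookup roughly every half second (see the timestamps) until it succeeds; the duration metric on the line above is the total time spent in this elevateKubeSystemPrivileges wait. A minimal sketch of the same wait, assuming a reachable cluster:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done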
	I1026 14:15:01.754787  846424 kubeadm.go:402] duration metric: took 15.58916607s to StartCluster
	I1026 14:15:01.754836  846424 settings.go:142] acquiring lock: {Name:mkab79daecf1fab35293493e1e2484069a81f3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:01.754978  846424 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 14:15:01.755482  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:01.755722  846424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 14:15:01.755738  846424 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 14:15:01.755806  846424 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1026 14:15:01.755939  846424 addons.go:69] Setting yakd=true in profile "addons-459729"
	I1026 14:15:01.755964  846424 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-459729"
	I1026 14:15:01.755989  846424 addons.go:69] Setting registry=true in profile "addons-459729"
	I1026 14:15:01.756000  846424 addons.go:69] Setting inspektor-gadget=true in profile "addons-459729"
	I1026 14:15:01.756006  846424 addons.go:238] Setting addon registry=true in "addons-459729"
	I1026 14:15:01.756016  846424 addons.go:238] Setting addon inspektor-gadget=true in "addons-459729"
	I1026 14:15:01.756040  846424 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:15:01.756049  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.756052  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.756034  846424 addons.go:69] Setting ingress=true in profile "addons-459729"
	I1026 14:15:01.756078  846424 addons.go:238] Setting addon ingress=true in "addons-459729"
	I1026 14:15:01.756055  846424 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-459729"
	I1026 14:15:01.756096  846424 addons.go:69] Setting registry-creds=true in profile "addons-459729"
	I1026 14:15:01.756104  846424 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-459729"
	I1026 14:15:01.756113  846424 addons.go:69] Setting default-storageclass=true in profile "addons-459729"
	I1026 14:15:01.756115  846424 addons.go:69] Setting storage-provisioner=true in profile "addons-459729"
	I1026 14:15:01.756130  846424 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-459729"
	I1026 14:15:01.756140  846424 addons.go:238] Setting addon storage-provisioner=true in "addons-459729"
	I1026 14:15:01.756147  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.756152  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.755989  846424 addons.go:69] Setting ingress-dns=true in profile "addons-459729"
	I1026 14:15:01.756657  846424 addons.go:238] Setting addon ingress-dns=true in "addons-459729"
	I1026 14:15:01.756719  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.756106  846424 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-459729"
	I1026 14:15:01.756889  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.756910  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.757017  846424 addons.go:69] Setting metrics-server=true in profile "addons-459729"
	I1026 14:15:01.757045  846424 addons.go:238] Setting addon metrics-server=true in "addons-459729"
	I1026 14:15:01.757082  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.757139  846424 addons.go:69] Setting volcano=true in profile "addons-459729"
	I1026 14:15:01.757156  846424 addons.go:238] Setting addon volcano=true in "addons-459729"
	I1026 14:15:01.757203  846424 addons.go:69] Setting gcp-auth=true in profile "addons-459729"
	I1026 14:15:01.757206  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.757222  846424 mustload.go:65] Loading cluster: addons-459729
	I1026 14:15:01.757433  846424 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:15:01.757547  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.757549  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.757698  846424 addons.go:69] Setting volumesnapshots=true in profile "addons-459729"
	I1026 14:15:01.757716  846424 addons.go:238] Setting addon volumesnapshots=true in "addons-459729"
	I1026 14:15:01.757734  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.757739  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.757840  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.758388  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.759608  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.755980  846424 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-459729"
	I1026 14:15:01.760402  846424 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-459729"
	I1026 14:15:01.760438  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.756108  846424 addons.go:238] Setting addon registry-creds=true in "addons-459729"
	I1026 14:15:01.760907  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.761372  846424 out.go:179] * Verifying Kubernetes components...
	I1026 14:15:01.761867  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.761926  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.762184  846424 addons.go:69] Setting cloud-spanner=true in profile "addons-459729"
	I1026 14:15:01.762211  846424 addons.go:238] Setting addon cloud-spanner=true in "addons-459729"
	I1026 14:15:01.762241  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.755981  846424 addons.go:238] Setting addon yakd=true in "addons-459729"
	I1026 14:15:01.762447  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.756085  846424 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-459729"
	I1026 14:15:01.762581  846424 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-459729"
	I1026 14:15:01.763509  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.763699  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.763743  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.763750  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.763779  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.764110  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.764900  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.765241  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.765248  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.768115  846424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 14:15:01.824394  846424 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1026 14:15:01.826325  846424 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 14:15:01.826360  846424 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 14:15:01.826434  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.827641  846424 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-459729"
	I1026 14:15:01.827777  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.828346  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	W1026 14:15:01.835680  846424 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1026 14:15:01.838243  846424 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 14:15:01.838670  846424 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1026 14:15:01.838918  846424 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1026 14:15:01.839837  846424 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1026 14:15:01.840040  846424 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1026 14:15:01.840135  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.840788  846424 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 14:15:01.840810  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 14:15:01.840875  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.841970  846424 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 14:15:01.843548  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1026 14:15:01.842268  846424 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1026 14:15:01.843080  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1026 14:15:01.843369  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.845123  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.846927  846424 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 14:15:01.846947  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1026 14:15:01.847004  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.856949  846424 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1026 14:15:01.856978  846424 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1026 14:15:01.857056  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.860776  846424 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1026 14:15:01.867308  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1026 14:15:01.867357  846424 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1026 14:15:01.868311  846424 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 14:15:01.868329  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1026 14:15:01.868399  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.868677  846424 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1026 14:15:01.871040  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1026 14:15:01.871111  846424 out.go:179]   - Using image docker.io/registry:3.0.0
	I1026 14:15:01.872855  846424 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1026 14:15:01.872878  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1026 14:15:01.872949  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.873125  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1026 14:15:01.873516  846424 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1026 14:15:01.873535  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1026 14:15:01.873835  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.877387  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1026 14:15:01.879560  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1026 14:15:01.882349  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1026 14:15:01.883556  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1026 14:15:01.892467  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1026 14:15:01.893569  846424 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1026 14:15:01.893595  846424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1026 14:15:01.893667  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.905909  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
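The repeated "docker container inspect -f" calls above resolve which host port Docker mapped to the container's SSH port: .NetworkSettings.Ports is a template map keyed by "22/tcp", each entry is a list of bindings, and indexing the first binding's HostPort yields 33536, the port every "new ssh client" line here dials on 127.0.0.1. Standalone:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-459729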
	I1026 14:15:01.906029  846424 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1026 14:15:01.908203  846424 out.go:179]   - Using image docker.io/busybox:stable
	I1026 14:15:01.912234  846424 addons.go:238] Setting addon default-storageclass=true in "addons-459729"
	I1026 14:15:01.913190  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.913688  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.914316  846424 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 14:15:01.914397  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1026 14:15:01.914467  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.925284  846424 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1026 14:15:01.929356  846424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
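The sed pipeline above patches CoreDNS in place: it inserts a hosts block just before the "forward . /etc/resolv.conf" line of the Corefile, so host.minikube.internal resolves to the host gateway (192.168.49.1), and adds "log" after "errors" to enable query logging. The fragment injected into the Corefile is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}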
	I1026 14:15:01.930036  846424 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1026 14:15:01.930058  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1026 14:15:01.930129  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.936261  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.937883  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.938657  846424 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 14:15:01.940626  846424 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1026 14:15:01.941914  846424 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1026 14:15:01.942000  846424 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1026 14:15:01.942101  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.941961  846424 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1026 14:15:01.945791  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.945864  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.948625  846424 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 14:15:01.949928  846424 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 14:15:01.949982  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1026 14:15:01.950059  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.955204  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.970925  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.975351  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.976053  846424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 14:15:01.978630  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.991435  846424 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 14:15:01.991462  846424 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 14:15:01.991528  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.991780  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:02.016851  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	W1026 14:15:02.019263  846424 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1026 14:15:02.019305  846424 retry.go:31] will retry after 217.923962ms: ssh: handshake failed: EOF
	I1026 14:15:02.023195  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:02.032276  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:02.035819  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:02.039781  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:02.125207  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 14:15:02.136547  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 14:15:02.141012  846424 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 14:15:02.141040  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1026 14:15:02.149864  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 14:15:02.150107  846424 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1026 14:15:02.150133  846424 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1026 14:15:02.153611  846424 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1026 14:15:02.153638  846424 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1026 14:15:02.155650  846424 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1026 14:15:02.155673  846424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1026 14:15:02.157330  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1026 14:15:02.160138  846424 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:02.160154  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1026 14:15:02.168525  846424 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 14:15:02.168554  846424 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 14:15:02.188885  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 14:15:02.190931  846424 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1026 14:15:02.190953  846424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1026 14:15:02.191058  846424 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1026 14:15:02.191119  846424 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1026 14:15:02.195824  846424 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1026 14:15:02.195847  846424 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1026 14:15:02.196657  846424 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1026 14:15:02.196677  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1026 14:15:02.197637  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1026 14:15:02.200528  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 14:15:02.207552  846424 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 14:15:02.207579  846424 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 14:15:02.232493  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 14:15:02.235058  846424 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1026 14:15:02.235104  846424 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1026 14:15:02.247417  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 14:15:02.247703  846424 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1026 14:15:02.247731  846424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1026 14:15:02.254701  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:02.261459  846424 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1026 14:15:02.261489  846424 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1026 14:15:02.297299  846424 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1026 14:15:02.297343  846424 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1026 14:15:02.298507  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1026 14:15:02.314881  846424 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1026 14:15:02.314916  846424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1026 14:15:02.328700  846424 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1026 14:15:02.328736  846424 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1026 14:15:02.358580  846424 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1026 14:15:02.359543  846424 node_ready.go:35] waiting up to 6m0s for node "addons-459729" to be "Ready" ...
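
From here the harness polls the node object until its NodeReady condition turns True; the node_ready.go:57 lines below are the negative polls. A hedged approximation of that loop with client-go (illustrative only; the kubeconfig path and node name are taken from the log):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Mirror the log's "waiting up to 6m0s" budget.
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-459729", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		}
    		fmt.Println(`node "addons-459729" has "Ready":"False" status (will retry)`)
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for node readiness")
    }
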
	I1026 14:15:02.371344  846424 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1026 14:15:02.371372  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1026 14:15:02.404369  846424 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1026 14:15:02.404399  846424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1026 14:15:02.424439  846424 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 14:15:02.424528  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1026 14:15:02.442236  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1026 14:15:02.460571  846424 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1026 14:15:02.460657  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1026 14:15:02.502911  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 14:15:02.534388  846424 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1026 14:15:02.534419  846424 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1026 14:15:02.545901  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 14:15:02.614728  846424 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1026 14:15:02.614838  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1026 14:15:02.667295  846424 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1026 14:15:02.667523  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1026 14:15:02.707698  846424 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 14:15:02.707786  846424 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1026 14:15:02.747588  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 14:15:02.873331  846424 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-459729" context rescaled to 1 replicas
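
The kapi.go:214 line reports the coredns Deployment being scaled down to one replica. One way to express that rescale through client-go's scale subresource (a sketch of the operation, not the code minikube runs):

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deployments := client.AppsV1().Deployments("kube-system")
    	// Read the current scale, set replicas, and write it back.
    	scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	scale.Spec.Replicas = 1
    	if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    }

Going through the scale subresource only touches the replica count, leaving the rest of the Deployment spec alone, which is why it suits a one-field adjustment like this.
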
	I1026 14:15:03.502753  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.302179853s)
	I1026 14:15:03.502793  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.270259468s)
	I1026 14:15:03.502801  846424 addons.go:479] Verifying addon ingress=true in "addons-459729"
	I1026 14:15:03.503063  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.255611006s)
	I1026 14:15:03.503098  846424 addons.go:479] Verifying addon metrics-server=true in "addons-459729"
	I1026 14:15:03.503181  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.248430064s)
	W1026 14:15:03.503268  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:03.503289  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.204745558s)
	I1026 14:15:03.503322  846424 addons.go:479] Verifying addon registry=true in "addons-459729"
	I1026 14:15:03.503380  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.061046188s)
	I1026 14:15:03.503295  846424 retry.go:31] will retry after 148.010934ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
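
The root cause in this retry loop never changes: the rendered ig-crd.yaml reaches the node without top-level apiVersion and kind keys, so kubectl's schema validation rejects it no matter how often it is reapplied (the ig-deployment.yaml half of the pair keeps applying fine, as the stdout above shows). A crude, stdlib-only pre-flight check for that failure mode (the local file name is an assumption; a real check would parse the YAML properly):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // hasTypeMeta reports whether every document in a multi-doc YAML manifest
    // declares top-level apiVersion and kind keys. This is a rough line-based
    // scan, not a full YAML parse.
    func hasTypeMeta(manifest string) bool {
    	for _, doc := range strings.Split(manifest, "\n---") {
    		var gotAPIVersion, gotKind bool
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(line, "apiVersion:") {
    				gotAPIVersion = true
    			}
    			if strings.HasPrefix(line, "kind:") {
    				gotKind = true
    			}
    		}
    		if strings.TrimSpace(doc) != "" && (!gotAPIVersion || !gotKind) {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	data, err := os.ReadFile("ig-crd.yaml") // assumed local copy of the addon manifest
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if !hasTypeMeta(string(data)) {
    		fmt.Println("manifest is missing apiVersion/kind; kubectl apply will reject it")
    	}
    }

Retrying cannot repair missing type metadata, which is why the identical error keeps recurring through the retries below.
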
	I1026 14:15:03.504599  846424 out.go:179] * Verifying registry addon...
	I1026 14:15:03.504631  846424 out.go:179] * Verifying ingress addon...
	I1026 14:15:03.507305  846424 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-459729 service yakd-dashboard -n yakd-dashboard
	
	I1026 14:15:03.508086  846424 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1026 14:15:03.508142  846424 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1026 14:15:03.511447  846424 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 14:15:03.511469  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:03.511568  846424 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1026 14:15:03.511589  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
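
The long runs of kapi.go:96 lines that follow are a poll loop: list pods for a label selector, report the phase, sleep, repeat, for up to several minutes. Roughly, in client-go terms (the selector and kubeconfig path are taken from the log; the loop itself is an illustration, not minikube's kapi package):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	selector := "kubernetes.io/minikube-addons=registry"
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
    			metav1.ListOptions{LabelSelector: selector})
    		if err == nil {
    			running := 0
    			for _, p := range pods.Items {
    				if p.Status.Phase == corev1.PodRunning {
    					running++
    				}
    			}
    			if len(pods.Items) > 0 && running == len(pods.Items) {
    				fmt.Println("all pods Running for selector", selector)
    				return
    			}
    			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }
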
	I1026 14:15:03.651987  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:03.931773  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.428808877s)
	W1026 14:15:03.931834  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1026 14:15:03.931861  846424 retry.go:31] will retry after 202.223495ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
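
Unlike the ig-crd.yaml case, this failure is an ordering race rather than a bad manifest: the VolumeSnapshotClass instance is applied in the same batch as the CRD that defines it, before the API server has registered the new kind, hence "ensure CRDs are installed first". One way to avoid the race is to split the batch and block on the CRD becoming Established before applying its instances; a sketch that shells out to kubectl for brevity (file names copied from the log, the helper is hypothetical):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // run invokes kubectl with the given arguments, forwarding its output.
    func run(args ...string) error {
    	cmd := exec.Command("kubectl", args...)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	// 1. Register the CRD first.
    	if err := run("apply", "-f", "snapshot.storage.k8s.io_volumesnapshotclasses.yaml"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// 2. Block until the API server has accepted the new kind.
    	if err := run("wait", "--for=condition=Established",
    		"crd/volumesnapshotclasses.snapshot.storage.k8s.io", "--timeout=60s"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// 3. Only now apply instances of VolumeSnapshotClass.
    	if err := run("apply", "-f", "csi-hostpath-snapshotclass.yaml"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }

The harness's own remedy is the timed retry visible below at 14:15:04.135249, which works here because the CRDs created by the first attempt become discoverable moments later.
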
	I1026 14:15:03.931929  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.386004332s)
	I1026 14:15:03.932280  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.184640366s)
	I1026 14:15:03.932321  846424 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-459729"
	I1026 14:15:03.934515  846424 out.go:179] * Verifying csi-hostpath-driver addon...
	I1026 14:15:03.936685  846424 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1026 14:15:03.939543  846424 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 14:15:03.939568  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:04.011803  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:04.012023  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:04.135249  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1026 14:15:04.302639  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:04.302684  846424 retry.go:31] will retry after 256.294826ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 14:15:04.362538  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:04.440917  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:04.541665  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:04.541710  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:04.559817  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:04.939696  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:05.011299  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:05.011458  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:05.440447  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:05.541188  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:05.541273  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:05.940840  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:06.011969  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:06.012243  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 14:15:06.362969  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:06.440395  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:06.540977  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:06.541042  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:06.641882  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.506580998s)
	I1026 14:15:06.641952  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.082094284s)
	W1026 14:15:06.641987  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:06.642010  846424 retry.go:31] will retry after 346.725146ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:06.940606  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:06.989704  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:07.011088  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:07.011280  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:07.440961  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:07.542090  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:07.542360  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 14:15:07.558417  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:07.558457  846424 retry.go:31] will retry after 465.781456ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:07.940090  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:08.011851  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:08.011921  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:08.025028  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1026 14:15:08.363131  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:08.439805  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:08.511865  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:08.512205  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 14:15:08.582561  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:08.582599  846424 retry.go:31] will retry after 1.449023391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:08.940711  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:09.011541  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:09.011689  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:09.440927  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:09.454842  846424 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1026 14:15:09.454915  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:09.474050  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:09.542099  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:09.542269  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:09.586209  846424 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1026 14:15:09.599936  846424 addons.go:238] Setting addon gcp-auth=true in "addons-459729"
	I1026 14:15:09.600004  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:09.600518  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:09.618865  846424 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1026 14:15:09.618925  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:09.637719  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:09.738033  846424 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 14:15:09.739603  846424 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1026 14:15:09.741100  846424 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1026 14:15:09.741126  846424 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1026 14:15:09.755471  846424 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1026 14:15:09.755502  846424 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1026 14:15:09.769570  846424 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 14:15:09.769600  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1026 14:15:09.783135  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 14:15:09.940438  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:10.011447  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:10.011724  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:10.032590  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:10.107618  846424 addons.go:479] Verifying addon gcp-auth=true in "addons-459729"
	I1026 14:15:10.109476  846424 out.go:179] * Verifying gcp-auth addon...
	I1026 14:15:10.112303  846424 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1026 14:15:10.115588  846424 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1026 14:15:10.115614  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:10.441825  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:10.511906  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:10.511972  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 14:15:10.611392  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:10.611426  846424 retry.go:31] will retry after 1.80430156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:10.614915  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:10.862859  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:10.939690  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:11.011633  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:11.011841  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:11.116133  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:11.440853  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:11.511600  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:11.511833  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:11.615829  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:11.940803  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:12.011795  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:12.012045  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:12.115725  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:12.416588  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:12.440181  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:12.511462  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:12.511639  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:12.615801  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:12.940325  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:15:12.964755  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:12.964784  846424 retry.go:31] will retry after 1.780244556s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:13.011987  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:13.012113  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:13.116258  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:13.363321  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:13.440372  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:13.511266  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:13.511405  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:13.615430  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:13.940076  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:14.012062  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:14.012116  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:14.116253  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:14.440242  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:14.512057  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:14.512338  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:14.615992  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:14.746241  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:14.940674  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:15.011505  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:15.011640  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:15.116328  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:15.316951  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:15.316989  846424 retry.go:31] will retry after 5.440492782s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:15.440200  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:15.511134  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:15.511275  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:15.616267  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:15.862887  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:15.939913  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:16.011983  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:16.012134  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:16.116436  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:16.440198  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:16.512498  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:16.512684  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:16.615786  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:16.940627  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:17.011646  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:17.011893  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:17.116034  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:17.440400  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:17.511242  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:17.511408  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:17.616515  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:17.940364  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:18.011130  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:18.011253  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:18.116015  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:18.363065  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:18.440278  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:18.512057  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:18.512257  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:18.616302  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:18.940378  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:19.011296  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:19.011355  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:19.116473  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:19.440955  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:19.511663  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:19.511896  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:19.616320  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:19.940560  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:20.011520  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:20.011797  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:20.115557  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:20.440901  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:20.511988  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:20.512031  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:20.615783  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:20.758096  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1026 14:15:20.862647  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:20.940915  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:21.012207  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:21.012289  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:21.117067  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:21.313675  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:21.313707  846424 retry.go:31] will retry after 8.91122247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
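
The inspektor-gadget apply fails the same way on every attempt: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because a document in it is missing the required top-level apiVersion and kind fields, while the objects from ig-deployment.yaml apply cleanly ("unchanged"/"configured"). Backing off cannot fix a bad file, so each retry reproduces the error; only adding the missing fields (or --validate=false, as the message itself suggests) would change the outcome. The retry behavior behind "will retry after ..." is roughly the following Go sketch; the backoff policy and helper names are assumptions, not minikube's actual retry.go internals:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyManifests shells out to kubectl the way the log above does and
	// retries on failure with a growing, jittered delay. This is a sketch:
	// the real retry.go policy may differ.
	func applyManifests(files ...string) error {
		args := []string{"apply", "--force"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		backoff := 5 * time.Second
		var lastErr error
		for attempt := 0; attempt < 5; attempt++ {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply failed: %v\nstdout/stderr:\n%s", err, out)
			// Jittered, growing delay, like the 8.9s / 13.3s gaps above.
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %s: %v\n", sleep, lastErr)
			time.Sleep(sleep)
			backoff *= 2
		}
		return lastErr
	}

	func main() {
		if err := applyManifests(
			"/etc/kubernetes/addons/ig-crd.yaml",
			"/etc/kubernetes/addons/ig-deployment.yaml",
		); err != nil {
			fmt.Println("giving up:", err)
		}
	}
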
	I1026 14:15:21.440656  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:21.511553  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:21.511689  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:21.615625  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:21.940584  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:22.011440  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:22.011654  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:22.115655  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:22.440488  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:22.511406  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:22.511550  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:22.615671  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:22.940358  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:23.011074  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:23.011174  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:23.116377  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:23.363318  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:23.440377  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:23.511345  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:23.511560  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:23.615384  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:23.940379  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:24.011307  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:24.011561  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:24.116091  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:24.440587  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:24.511418  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:24.511646  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:24.615811  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:24.939855  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:25.011683  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:25.011782  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:25.116357  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:25.440984  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:25.511873  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:25.511903  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:25.615664  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:25.862365  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:25.940295  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:26.011232  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:26.011407  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:26.115402  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:26.440446  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:26.511527  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:26.511752  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:26.615540  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:26.940929  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:27.042156  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:27.042322  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:27.142622  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:27.440313  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:27.511616  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:27.511736  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:27.615910  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:27.863296  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:27.940352  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:28.011545  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:28.011563  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:28.115289  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:28.440439  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:28.511542  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:28.511612  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:28.615532  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:28.940483  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:29.011417  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:29.011572  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:29.115862  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:29.440528  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:29.511762  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:29.511961  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:29.615472  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:29.863626  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:29.940511  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:30.011347  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:30.011526  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:30.115553  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:30.225751  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:30.440732  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:30.511761  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:30.511809  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:30.615389  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:30.801581  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:30.801612  846424 retry.go:31] will retry after 13.384924225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:30.940459  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:31.011507  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:31.011625  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:31.115351  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:31.440233  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:31.510980  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:31.511100  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:31.616243  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:31.940463  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:32.011513  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:32.011678  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:32.115622  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:32.362628  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:32.440664  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:32.511680  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:32.511737  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:32.615569  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:32.940541  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:33.011546  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:33.011664  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:33.115806  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:33.440064  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:33.512047  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:33.512126  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:33.615997  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:33.939890  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:34.012203  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:34.012266  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:34.116285  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:34.362894  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:34.439794  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:34.511885  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:34.511888  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:34.615635  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:34.940637  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:35.011802  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:35.012036  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:35.116073  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:35.441237  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:35.511039  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:35.511313  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:35.616525  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:35.940536  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:36.011591  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:36.011913  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:36.115551  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:36.440489  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:36.511389  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:36.511596  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:36.615558  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:36.862289  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:36.940212  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:37.011145  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:37.011314  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:37.116406  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:37.440773  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:37.511657  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:37.511746  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:37.615698  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:37.940654  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:38.011611  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:38.011630  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:38.115442  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:38.440259  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:38.511046  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:38.511100  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:38.616066  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:38.863142  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:38.940023  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:39.012065  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:39.012130  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:39.116011  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:39.439627  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:39.511481  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:39.511553  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:39.615371  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:39.940298  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:40.011243  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:40.011419  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:40.115389  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:40.440352  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:40.511019  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:40.511307  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:40.616092  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:40.939667  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:41.011746  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:41.011778  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:41.115465  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:41.363466  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:41.440572  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:41.511456  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:41.511511  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:41.615524  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:41.940641  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:42.011604  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:42.011718  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:42.115753  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:42.440553  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:42.511790  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:42.512024  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:42.615996  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:42.940386  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:43.011106  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:43.011220  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:43.116069  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:43.363260  846424 node_ready.go:49] node "addons-459729" is "Ready"
	I1026 14:15:43.363297  846424 node_ready.go:38] duration metric: took 41.003701767s for node "addons-459729" to be "Ready" ...
	I1026 14:15:43.363317  846424 api_server.go:52] waiting for apiserver process to appear ...
	I1026 14:15:43.363400  846424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 14:15:43.381711  846424 api_server.go:72] duration metric: took 41.62593283s to wait for apiserver process to appear ...
	I1026 14:15:43.381745  846424 api_server.go:88] waiting for apiserver healthz status ...
	I1026 14:15:43.381771  846424 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 14:15:43.386270  846424 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1026 14:15:43.387310  846424 api_server.go:141] control plane version: v1.34.1
	I1026 14:15:43.387346  846424 api_server.go:131] duration metric: took 5.591629ms to wait for apiserver health ...
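
Once the node goes Ready, the bring-up gates on the control plane by polling https://192.168.49.2:8443/healthz until it answers 200 with "ok", then reading the server version. A minimal Go poller of that shape, assuming a self-signed apiserver certificate (hence the insecure TLS config, which is an assumption for the sketch, not minikube's actual client setup):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver healthz endpoint until it returns
	// 200, or the deadline passes.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Test clusters typically serve a self-signed cert.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200:\n%s\n", url, body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver healthz not ok within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
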
	I1026 14:15:43.387357  846424 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 14:15:43.390642  846424 system_pods.go:59] 20 kube-system pods found
	I1026 14:15:43.390691  846424 system_pods.go:61] "amd-gpu-device-plugin-cpl45" [3361dd34-f7d4-4824-b347-6f718134c1bc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 14:15:43.390702  846424 system_pods.go:61] "coredns-66bc5c9577-58kmh" [5f6dbec0-d423-40de-b8d5-a900bc1f5851] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 14:15:43.390711  846424 system_pods.go:61] "csi-hostpath-attacher-0" [eab2876d-7674-4188-8967-19945573776e] Pending
	I1026 14:15:43.390716  846424 system_pods.go:61] "csi-hostpath-resizer-0" [abb5e910-471c-42c2-ae26-54af2fb0e618] Pending
	I1026 14:15:43.390720  846424 system_pods.go:61] "csi-hostpathplugin-86x7s" [a3788919-a77b-413f-a55b-c6a616ccb202] Pending
	I1026 14:15:43.390723  846424 system_pods.go:61] "etcd-addons-459729" [ffa30eb3-3fdb-4184-bb14-f06554bd4979] Running
	I1026 14:15:43.390726  846424 system_pods.go:61] "kindnet-qskcd" [cf0b58e9-eade-47c7-840d-1de1857e53f1] Running
	I1026 14:15:43.390729  846424 system_pods.go:61] "kube-apiserver-addons-459729" [9ab803e5-033d-4f89-8aae-9f6ccc56ea17] Running
	I1026 14:15:43.390732  846424 system_pods.go:61] "kube-controller-manager-addons-459729" [579e4b55-312d-49a7-bd86-7d65e8efde23] Running
	I1026 14:15:43.390742  846424 system_pods.go:61] "kube-ingress-dns-minikube" [238ae152-8a88-4041-abdd-bf5aacdc6f1a] Pending
	I1026 14:15:43.390745  846424 system_pods.go:61] "kube-proxy-2f7sr" [8ea92d4a-c60f-40db-ab7a-8772c201060f] Running
	I1026 14:15:43.390751  846424 system_pods.go:61] "kube-scheduler-addons-459729" [f7a61f82-6ea9-4993-b093-a03245db6ed6] Running
	I1026 14:15:43.390756  846424 system_pods.go:61] "metrics-server-85b7d694d7-g2nwm" [ea0a025f-f342-49d8-89cc-a9bd82a08b87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:15:43.390762  846424 system_pods.go:61] "nvidia-device-plugin-daemonset-24shm" [1bb55f2d-872c-4696-aac2-64ab714c33e4] Pending
	I1026 14:15:43.390784  846424 system_pods.go:61] "registry-6b586f9694-ds6k9" [14709e0b-ba9d-4eb0-b79e-a8106cba342e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:15:43.390790  846424 system_pods.go:61] "registry-creds-764b6fb674-dk4lc" [11a2adc0-f603-426f-af30-919a48eee4bc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:15:43.390795  846424 system_pods.go:61] "registry-proxy-cs2k2" [cecd1865-e35b-4581-8aaf-358948bc244c] Pending
	I1026 14:15:43.390799  846424 system_pods.go:61] "snapshot-controller-7d9fbc56b8-d9lzl" [673a7351-7a17-4a94-b2df-c246a1fd5519] Pending
	I1026 14:15:43.390802  846424 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wrh9q" [66fdcdd8-8b70-496f-8b43-bf5dc2c1cb1a] Pending
	I1026 14:15:43.390807  846424 system_pods.go:61] "storage-provisioner" [01091c73-f5b0-4c51-ad56-fdc2723f09b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 14:15:43.390818  846424 system_pods.go:74] duration metric: took 3.45377ms to wait for pod list to return data ...
	I1026 14:15:43.390829  846424 default_sa.go:34] waiting for default service account to be created ...
	I1026 14:15:43.394537  846424 default_sa.go:45] found service account: "default"
	I1026 14:15:43.394566  846424 default_sa.go:55] duration metric: took 3.728908ms for default service account to be created ...
	I1026 14:15:43.394579  846424 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 14:15:43.398295  846424 system_pods.go:86] 20 kube-system pods found
	I1026 14:15:43.398331  846424 system_pods.go:89] "amd-gpu-device-plugin-cpl45" [3361dd34-f7d4-4824-b347-6f718134c1bc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 14:15:43.398340  846424 system_pods.go:89] "coredns-66bc5c9577-58kmh" [5f6dbec0-d423-40de-b8d5-a900bc1f5851] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 14:15:43.398348  846424 system_pods.go:89] "csi-hostpath-attacher-0" [eab2876d-7674-4188-8967-19945573776e] Pending
	I1026 14:15:43.398354  846424 system_pods.go:89] "csi-hostpath-resizer-0" [abb5e910-471c-42c2-ae26-54af2fb0e618] Pending
	I1026 14:15:43.398359  846424 system_pods.go:89] "csi-hostpathplugin-86x7s" [a3788919-a77b-413f-a55b-c6a616ccb202] Pending
	I1026 14:15:43.398364  846424 system_pods.go:89] "etcd-addons-459729" [ffa30eb3-3fdb-4184-bb14-f06554bd4979] Running
	I1026 14:15:43.398371  846424 system_pods.go:89] "kindnet-qskcd" [cf0b58e9-eade-47c7-840d-1de1857e53f1] Running
	I1026 14:15:43.398377  846424 system_pods.go:89] "kube-apiserver-addons-459729" [9ab803e5-033d-4f89-8aae-9f6ccc56ea17] Running
	I1026 14:15:43.398385  846424 system_pods.go:89] "kube-controller-manager-addons-459729" [579e4b55-312d-49a7-bd86-7d65e8efde23] Running
	I1026 14:15:43.398396  846424 system_pods.go:89] "kube-ingress-dns-minikube" [238ae152-8a88-4041-abdd-bf5aacdc6f1a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 14:15:43.398405  846424 system_pods.go:89] "kube-proxy-2f7sr" [8ea92d4a-c60f-40db-ab7a-8772c201060f] Running
	I1026 14:15:43.398412  846424 system_pods.go:89] "kube-scheduler-addons-459729" [f7a61f82-6ea9-4993-b093-a03245db6ed6] Running
	I1026 14:15:43.398423  846424 system_pods.go:89] "metrics-server-85b7d694d7-g2nwm" [ea0a025f-f342-49d8-89cc-a9bd82a08b87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:15:43.398432  846424 system_pods.go:89] "nvidia-device-plugin-daemonset-24shm" [1bb55f2d-872c-4696-aac2-64ab714c33e4] Pending
	I1026 14:15:43.398441  846424 system_pods.go:89] "registry-6b586f9694-ds6k9" [14709e0b-ba9d-4eb0-b79e-a8106cba342e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:15:43.398452  846424 system_pods.go:89] "registry-creds-764b6fb674-dk4lc" [11a2adc0-f603-426f-af30-919a48eee4bc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:15:43.398460  846424 system_pods.go:89] "registry-proxy-cs2k2" [cecd1865-e35b-4581-8aaf-358948bc244c] Pending
	I1026 14:15:43.398466  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d9lzl" [673a7351-7a17-4a94-b2df-c246a1fd5519] Pending
	I1026 14:15:43.398474  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wrh9q" [66fdcdd8-8b70-496f-8b43-bf5dc2c1cb1a] Pending
	I1026 14:15:43.398481  846424 system_pods.go:89] "storage-provisioner" [01091c73-f5b0-4c51-ad56-fdc2723f09b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 14:15:43.398503  846424 retry.go:31] will retry after 285.578303ms: missing components: kube-dns
	I1026 14:15:43.439988  846424 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 14:15:43.440011  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:43.511891  846424 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 14:15:43.511923  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:43.512089  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:43.617305  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:43.720800  846424 system_pods.go:86] 20 kube-system pods found
	I1026 14:15:43.720851  846424 system_pods.go:89] "amd-gpu-device-plugin-cpl45" [3361dd34-f7d4-4824-b347-6f718134c1bc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 14:15:43.720871  846424 system_pods.go:89] "coredns-66bc5c9577-58kmh" [5f6dbec0-d423-40de-b8d5-a900bc1f5851] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 14:15:43.720883  846424 system_pods.go:89] "csi-hostpath-attacher-0" [eab2876d-7674-4188-8967-19945573776e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 14:15:43.720891  846424 system_pods.go:89] "csi-hostpath-resizer-0" [abb5e910-471c-42c2-ae26-54af2fb0e618] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 14:15:43.720909  846424 system_pods.go:89] "csi-hostpathplugin-86x7s" [a3788919-a77b-413f-a55b-c6a616ccb202] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 14:15:43.720924  846424 system_pods.go:89] "etcd-addons-459729" [ffa30eb3-3fdb-4184-bb14-f06554bd4979] Running
	I1026 14:15:43.720930  846424 system_pods.go:89] "kindnet-qskcd" [cf0b58e9-eade-47c7-840d-1de1857e53f1] Running
	I1026 14:15:43.720951  846424 system_pods.go:89] "kube-apiserver-addons-459729" [9ab803e5-033d-4f89-8aae-9f6ccc56ea17] Running
	I1026 14:15:43.720962  846424 system_pods.go:89] "kube-controller-manager-addons-459729" [579e4b55-312d-49a7-bd86-7d65e8efde23] Running
	I1026 14:15:43.720971  846424 system_pods.go:89] "kube-ingress-dns-minikube" [238ae152-8a88-4041-abdd-bf5aacdc6f1a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 14:15:43.720984  846424 system_pods.go:89] "kube-proxy-2f7sr" [8ea92d4a-c60f-40db-ab7a-8772c201060f] Running
	I1026 14:15:43.720991  846424 system_pods.go:89] "kube-scheduler-addons-459729" [f7a61f82-6ea9-4993-b093-a03245db6ed6] Running
	I1026 14:15:43.721001  846424 system_pods.go:89] "metrics-server-85b7d694d7-g2nwm" [ea0a025f-f342-49d8-89cc-a9bd82a08b87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:15:43.721016  846424 system_pods.go:89] "nvidia-device-plugin-daemonset-24shm" [1bb55f2d-872c-4696-aac2-64ab714c33e4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 14:15:43.721023  846424 system_pods.go:89] "registry-6b586f9694-ds6k9" [14709e0b-ba9d-4eb0-b79e-a8106cba342e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:15:43.721031  846424 system_pods.go:89] "registry-creds-764b6fb674-dk4lc" [11a2adc0-f603-426f-af30-919a48eee4bc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:15:43.721042  846424 system_pods.go:89] "registry-proxy-cs2k2" [cecd1865-e35b-4581-8aaf-358948bc244c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 14:15:43.721056  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d9lzl" [673a7351-7a17-4a94-b2df-c246a1fd5519] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:15:43.721066  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wrh9q" [66fdcdd8-8b70-496f-8b43-bf5dc2c1cb1a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:15:43.721075  846424 system_pods.go:89] "storage-provisioner" [01091c73-f5b0-4c51-ad56-fdc2723f09b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 14:15:43.721098  846424 retry.go:31] will retry after 329.971946ms: missing components: kube-dns
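
The k8s-apps gate works the same way: enumerate the kube-system pods and retry while any required component is missing; here coredns (which carries the k8s-app=kube-dns label) is the holdout until 14:15:44. A rough equivalent via kubectl, where the label selector and retry cadence are assumptions rather than minikube's internal client code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// kubeDNSRunning reports whether every kube-dns/coredns pod is Running.
	func kubeDNSRunning() bool {
		out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pods",
			"-l", "k8s-app=kube-dns",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err != nil {
			return false
		}
		phases := strings.Fields(string(out))
		if len(phases) == 0 {
			return false // no pods scheduled yet
		}
		for _, p := range phases {
			if p != "Running" {
				return false
			}
		}
		return true
	}

	func main() {
		for !kubeDNSRunning() {
			fmt.Println("missing components: kube-dns, retrying")
			time.Sleep(300 * time.Millisecond)
		}
		fmt.Println("kube-dns is running")
	}
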
	I1026 14:15:43.942121  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:44.012262  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:44.012376  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:44.056065  846424 system_pods.go:86] 20 kube-system pods found
	I1026 14:15:44.056108  846424 system_pods.go:89] "amd-gpu-device-plugin-cpl45" [3361dd34-f7d4-4824-b347-6f718134c1bc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 14:15:44.056119  846424 system_pods.go:89] "coredns-66bc5c9577-58kmh" [5f6dbec0-d423-40de-b8d5-a900bc1f5851] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 14:15:44.056129  846424 system_pods.go:89] "csi-hostpath-attacher-0" [eab2876d-7674-4188-8967-19945573776e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 14:15:44.056139  846424 system_pods.go:89] "csi-hostpath-resizer-0" [abb5e910-471c-42c2-ae26-54af2fb0e618] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 14:15:44.056147  846424 system_pods.go:89] "csi-hostpathplugin-86x7s" [a3788919-a77b-413f-a55b-c6a616ccb202] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 14:15:44.056153  846424 system_pods.go:89] "etcd-addons-459729" [ffa30eb3-3fdb-4184-bb14-f06554bd4979] Running
	I1026 14:15:44.056171  846424 system_pods.go:89] "kindnet-qskcd" [cf0b58e9-eade-47c7-840d-1de1857e53f1] Running
	I1026 14:15:44.056179  846424 system_pods.go:89] "kube-apiserver-addons-459729" [9ab803e5-033d-4f89-8aae-9f6ccc56ea17] Running
	I1026 14:15:44.056184  846424 system_pods.go:89] "kube-controller-manager-addons-459729" [579e4b55-312d-49a7-bd86-7d65e8efde23] Running
	I1026 14:15:44.056193  846424 system_pods.go:89] "kube-ingress-dns-minikube" [238ae152-8a88-4041-abdd-bf5aacdc6f1a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 14:15:44.056202  846424 system_pods.go:89] "kube-proxy-2f7sr" [8ea92d4a-c60f-40db-ab7a-8772c201060f] Running
	I1026 14:15:44.056209  846424 system_pods.go:89] "kube-scheduler-addons-459729" [f7a61f82-6ea9-4993-b093-a03245db6ed6] Running
	I1026 14:15:44.056217  846424 system_pods.go:89] "metrics-server-85b7d694d7-g2nwm" [ea0a025f-f342-49d8-89cc-a9bd82a08b87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:15:44.056229  846424 system_pods.go:89] "nvidia-device-plugin-daemonset-24shm" [1bb55f2d-872c-4696-aac2-64ab714c33e4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 14:15:44.056239  846424 system_pods.go:89] "registry-6b586f9694-ds6k9" [14709e0b-ba9d-4eb0-b79e-a8106cba342e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:15:44.056251  846424 system_pods.go:89] "registry-creds-764b6fb674-dk4lc" [11a2adc0-f603-426f-af30-919a48eee4bc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:15:44.056261  846424 system_pods.go:89] "registry-proxy-cs2k2" [cecd1865-e35b-4581-8aaf-358948bc244c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 14:15:44.056270  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d9lzl" [673a7351-7a17-4a94-b2df-c246a1fd5519] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:15:44.056276  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wrh9q" [66fdcdd8-8b70-496f-8b43-bf5dc2c1cb1a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:15:44.056281  846424 system_pods.go:89] "storage-provisioner" [01091c73-f5b0-4c51-ad56-fdc2723f09b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 14:15:44.056300  846424 retry.go:31] will retry after 468.560484ms: missing components: kube-dns
	I1026 14:15:44.117136  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:44.187375  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:44.441427  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:44.511423  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:44.511459  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:44.530251  846424 system_pods.go:86] 20 kube-system pods found
	I1026 14:15:44.530287  846424 system_pods.go:89] "amd-gpu-device-plugin-cpl45" [3361dd34-f7d4-4824-b347-6f718134c1bc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 14:15:44.530295  846424 system_pods.go:89] "coredns-66bc5c9577-58kmh" [5f6dbec0-d423-40de-b8d5-a900bc1f5851] Running
	I1026 14:15:44.530306  846424 system_pods.go:89] "csi-hostpath-attacher-0" [eab2876d-7674-4188-8967-19945573776e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 14:15:44.530314  846424 system_pods.go:89] "csi-hostpath-resizer-0" [abb5e910-471c-42c2-ae26-54af2fb0e618] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 14:15:44.530323  846424 system_pods.go:89] "csi-hostpathplugin-86x7s" [a3788919-a77b-413f-a55b-c6a616ccb202] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 14:15:44.530329  846424 system_pods.go:89] "etcd-addons-459729" [ffa30eb3-3fdb-4184-bb14-f06554bd4979] Running
	I1026 14:15:44.530334  846424 system_pods.go:89] "kindnet-qskcd" [cf0b58e9-eade-47c7-840d-1de1857e53f1] Running
	I1026 14:15:44.530344  846424 system_pods.go:89] "kube-apiserver-addons-459729" [9ab803e5-033d-4f89-8aae-9f6ccc56ea17] Running
	I1026 14:15:44.530350  846424 system_pods.go:89] "kube-controller-manager-addons-459729" [579e4b55-312d-49a7-bd86-7d65e8efde23] Running
	I1026 14:15:44.530361  846424 system_pods.go:89] "kube-ingress-dns-minikube" [238ae152-8a88-4041-abdd-bf5aacdc6f1a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 14:15:44.530366  846424 system_pods.go:89] "kube-proxy-2f7sr" [8ea92d4a-c60f-40db-ab7a-8772c201060f] Running
	I1026 14:15:44.530376  846424 system_pods.go:89] "kube-scheduler-addons-459729" [f7a61f82-6ea9-4993-b093-a03245db6ed6] Running
	I1026 14:15:44.530383  846424 system_pods.go:89] "metrics-server-85b7d694d7-g2nwm" [ea0a025f-f342-49d8-89cc-a9bd82a08b87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:15:44.530396  846424 system_pods.go:89] "nvidia-device-plugin-daemonset-24shm" [1bb55f2d-872c-4696-aac2-64ab714c33e4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 14:15:44.530415  846424 system_pods.go:89] "registry-6b586f9694-ds6k9" [14709e0b-ba9d-4eb0-b79e-a8106cba342e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:15:44.530428  846424 system_pods.go:89] "registry-creds-764b6fb674-dk4lc" [11a2adc0-f603-426f-af30-919a48eee4bc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:15:44.530438  846424 system_pods.go:89] "registry-proxy-cs2k2" [cecd1865-e35b-4581-8aaf-358948bc244c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 14:15:44.530446  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d9lzl" [673a7351-7a17-4a94-b2df-c246a1fd5519] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:15:44.530456  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wrh9q" [66fdcdd8-8b70-496f-8b43-bf5dc2c1cb1a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:15:44.530462  846424 system_pods.go:89] "storage-provisioner" [01091c73-f5b0-4c51-ad56-fdc2723f09b2] Running
	I1026 14:15:44.530472  846424 system_pods.go:126] duration metric: took 1.135885614s to wait for k8s-apps to be running ...
	I1026 14:15:44.530482  846424 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 14:15:44.530536  846424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 14:15:44.616031  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:44.908118  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:44.908188  846424 retry.go:31] will retry after 19.716620035s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
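
The validation failure is specific: kubectl's client-side validation requires every document it applies to declare top-level apiVersion and kind, and ig-crd.yaml is missing both, so only the objects from ig-deployment.yaml (reported as unchanged/configured in the stdout above) go through. A minimal sketch of that header check, assuming gopkg.in/yaml.v3 and an illustrative file path:

	package main
	
	import (
		"bytes"
		"fmt"
		"io"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	func main() {
		data, err := os.ReadFile("ig-crd.yaml") // illustrative path
		if err != nil {
			panic(err)
		}
		// a manifest file may hold several `---`-separated documents;
		// each one must carry its own apiVersion and kind
		dec := yaml.NewDecoder(bytes.NewReader(data))
		for i := 1; ; i++ {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			if doc.APIVersion == "" || doc.Kind == "" {
				fmt.Printf("document %d: [apiVersion not set, kind not set]\n", i)
				os.Exit(1)
			}
		}
		fmt.Println("all documents carry apiVersion and kind")
	}
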
	I1026 14:15:44.908207  846424 system_svc.go:56] duration metric: took 377.714352ms WaitForService to wait for kubelet
	I1026 14:15:44.908230  846424 kubeadm.go:586] duration metric: took 43.152458642s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 14:15:44.908250  846424 node_conditions.go:102] verifying NodePressure condition ...
	I1026 14:15:44.911337  846424 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 14:15:44.911364  846424 node_conditions.go:123] node cpu capacity is 8
	I1026 14:15:44.911397  846424 node_conditions.go:105] duration metric: took 3.140307ms to run NodePressure ...
	I1026 14:15:44.911413  846424 start.go:241] waiting for startup goroutines ...
	I1026 14:15:44.940945  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:45.011805  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:45.011886  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:45.116285  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:45.440843  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:45.513224  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:45.513412  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:45.616570  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:45.942361  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:46.012675  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:46.013528  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:46.117474  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:46.441947  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:46.512341  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:46.512535  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:46.616794  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:46.940470  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:47.011869  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:47.011931  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:47.116563  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:47.440841  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:47.512115  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:47.512220  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:47.616287  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:47.941835  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:48.012250  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:48.012341  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:48.116362  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:48.441006  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:48.512345  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:48.512356  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:48.616602  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:48.940607  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:49.011849  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:49.012016  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:49.116379  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:49.440767  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:49.511952  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:49.511976  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:49.616460  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:49.941743  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:50.012219  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:50.012379  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:50.116868  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:50.441466  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:50.513107  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:50.515804  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:50.616823  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:50.941030  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:51.012636  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:51.012725  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:51.115494  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:51.441781  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:51.512071  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:51.512308  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:51.616593  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:51.940965  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:52.012409  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:52.012488  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:52.115563  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:52.443333  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:52.511231  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:52.511257  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:52.616100  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:52.940805  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:53.012226  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:53.012332  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:53.116967  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:53.440270  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:53.511122  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:53.511202  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:53.615637  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:53.940600  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:54.011614  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:54.011714  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:54.116924  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:54.441210  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:54.512865  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:54.513048  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:54.617005  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:54.940859  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:55.012113  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:55.012224  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:55.116663  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:55.441153  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:55.512196  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:55.512413  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:55.616687  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:55.940578  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:56.011673  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:56.011694  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:56.115735  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:56.440993  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:56.512523  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:56.512651  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:56.615678  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:56.940579  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:57.041197  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:57.041223  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:57.116077  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:57.441079  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:57.541533  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:57.541819  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:57.615870  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:57.940598  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:58.011563  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:58.011563  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:58.115144  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:58.441032  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:58.511836  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:58.511904  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:58.616428  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:58.941373  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:59.012293  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:59.012340  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:59.115780  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:59.440337  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:59.511639  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:59.511739  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:59.616062  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:59.940947  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:00.012642  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:00.012851  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:00.116169  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:00.441026  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:00.512261  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:00.512331  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:00.616120  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:00.941108  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:01.012439  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:01.012548  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:01.115880  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:01.440399  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:01.511194  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:01.511239  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:01.616484  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:01.940063  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:02.012035  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:02.012152  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:02.116524  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:02.440114  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:02.511694  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:02.511958  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:02.616287  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:02.940613  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:03.011613  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:03.011834  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:03.115994  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:03.440884  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:03.541763  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:03.541805  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:03.615582  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:03.939875  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:04.012997  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:04.012997  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:04.116050  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:04.440720  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:04.541348  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:04.541372  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:04.616022  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:04.625105  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:04.940631  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:05.011348  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:05.011465  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:05.116030  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:16:05.176887  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:05.176928  846424 retry.go:31] will retry after 26.54487401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
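
retry.go reacts to each failure by scheduling another attempt after a randomized, growing delay, which is why the logged waits (19.716620035s, then 26.54487401s) are odd values rather than round ones. A minimal sketch of that retry-with-jittered-backoff pattern; the constants and helper name are illustrative, not minikube's actual implementation:

	package main
	
	import (
		"fmt"
		"math/rand"
		"time"
	)
	
	// retry runs fn up to attempts times, sleeping between failures for
	// a backoff that grows each round and carries random jitter.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			backoff := base << uint(i)                                // double each round
			backoff += time.Duration(rand.Int63n(int64(backoff / 2))) // up to +50% jitter
			fmt.Printf("will retry after %v: %v\n", backoff, err)
			time.Sleep(backoff)
		}
		return err
	}
	
	func main() {
		n := 0
		_ = retry(5, 100*time.Millisecond, func() error {
			n++
			if n < 3 {
				return fmt.Errorf("apply failed (attempt %d)", n)
			}
			return nil
		})
	}
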
	I1026 14:16:05.441807  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:05.511995  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:05.512004  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:05.617230  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:05.944172  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:06.012901  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:06.014375  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:06.116181  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:06.473520  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:06.678507  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:06.678878  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:06.678919  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:06.943847  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:07.014324  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:07.015611  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:07.115649  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:07.440832  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:07.512429  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:07.512442  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:07.616798  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:07.941418  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:08.042703  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:08.042726  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:08.116002  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:08.440870  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:08.512242  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:08.512318  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:08.617191  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:08.940574  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:09.011290  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:09.011501  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:09.116370  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:09.441256  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:09.512541  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:09.512777  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:09.615743  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:09.940529  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:10.017652  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:10.017858  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:10.151413  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:10.551948  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:10.552021  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:10.552037  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:10.615849  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:10.940842  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:11.012102  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:11.012263  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:11.116194  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:11.440707  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:11.511926  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:11.511992  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:11.616604  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:11.959290  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:12.081235  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:12.081274  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:12.243116  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:12.442118  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:12.542025  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:12.542035  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:12.615889  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:12.940336  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:13.011966  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:13.012041  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:13.116222  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:13.441408  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:13.511717  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:13.511795  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:13.616459  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:13.941451  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:14.011632  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:14.011677  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:14.115643  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:14.440266  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:14.512541  846424 kapi.go:107] duration metric: took 1m11.004457602s to wait for kubernetes.io/minikube-addons=registry ...
	I1026 14:16:14.512727  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:14.616135  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:14.941020  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:15.011868  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:15.116053  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:15.441317  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:15.512641  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:15.616897  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:15.940327  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:16.012834  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:16.116231  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:16.554332  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:16.554383  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:16.702636  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:16.941451  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:17.042536  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:17.115437  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:17.440730  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:17.512822  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:17.615703  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:17.940235  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:18.011654  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:18.115582  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:18.442174  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:18.512192  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:18.615605  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:18.940832  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:19.012192  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:19.116052  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:19.445141  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:19.512920  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:19.616513  846424 kapi.go:107] duration metric: took 1m9.504207447s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1026 14:16:19.618361  846424 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-459729 cluster.
	I1026 14:16:19.619952  846424 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1026 14:16:19.621190  846424 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
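
The opt-out works by label: the gcp-auth webhook skips any pod whose metadata carries the gcp-auth-skip-secret key. A sketch of such a pod built with client-go types; the "true" value follows minikube's documented example and is an assumption here, since the message above only names the key:

	package main
	
	import (
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)
	
	func main() {
		// a pod that opts out of gcp-auth credential mounting
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name:   "no-gcp-creds",
				Labels: map[string]string{"gcp-auth-skip-secret": "true"}, // value assumed
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "docker.io/nginx:alpine"}},
			},
		}
		out, err := yaml.Marshal(pod)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}
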
	I1026 14:16:19.941420  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:20.012984  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:20.467066  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:20.512438  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:20.941556  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:21.011657  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:21.440326  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:21.512699  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:21.940391  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:22.013074  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:22.441421  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:22.512726  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:22.941511  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:23.011427  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:23.441692  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:23.541961  846424 kapi.go:107] duration metric: took 1m20.033815029s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1026 14:16:23.939860  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:24.441033  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:24.940666  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:25.441106  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:25.949894  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:26.440937  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:26.940765  846424 kapi.go:107] duration metric: took 1m23.004082526s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
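
Each kapi.go:96 line above is one iteration of a label-selector poll: list matching pods, report the current state, sleep, and repeat until a pod comes up, at which point kapi.go:107 logs the total duration. A minimal client-go sketch of that loop; the kubeconfig path and selector come from the log, while the helper name and the roughly half-second interval the timestamps suggest are illustrative:

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitForPod polls until a pod matching selector reports phase Running.
	func waitForPod(cs *kubernetes.Clientset, ns, selector string) error {
		for {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			for _, p := range pods.Items {
				if p.Status.Phase == "Running" {
					return nil
				}
			}
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			time.Sleep(500 * time.Millisecond)
		}
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForPod(cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver"); err != nil {
			panic(err)
		}
	}
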
	I1026 14:16:31.723355  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1026 14:16:32.268610  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 14:16:32.268728  846424 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1026 14:16:32.270314  846424 out.go:179] * Enabled addons: storage-provisioner, ingress-dns, amd-gpu-device-plugin, registry-creds, nvidia-device-plugin, cloud-spanner, metrics-server, yakd, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1026 14:16:32.271316  846424 addons.go:514] duration metric: took 1m30.515515484s for enable addons: enabled=[storage-provisioner ingress-dns amd-gpu-device-plugin registry-creds nvidia-device-plugin cloud-spanner metrics-server yakd default-storageclass storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1026 14:16:32.271351  846424 start.go:246] waiting for cluster config update ...
	I1026 14:16:32.271372  846424 start.go:255] writing updated cluster config ...
	I1026 14:16:32.271625  846424 ssh_runner.go:195] Run: rm -f paused
	I1026 14:16:32.275601  846424 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 14:16:32.278988  846424 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-58kmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.283800  846424 pod_ready.go:94] pod "coredns-66bc5c9577-58kmh" is "Ready"
	I1026 14:16:32.283832  846424 pod_ready.go:86] duration metric: took 4.822784ms for pod "coredns-66bc5c9577-58kmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.285751  846424 pod_ready.go:83] waiting for pod "etcd-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.289426  846424 pod_ready.go:94] pod "etcd-addons-459729" is "Ready"
	I1026 14:16:32.289447  846424 pod_ready.go:86] duration metric: took 3.67723ms for pod "etcd-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.291471  846424 pod_ready.go:83] waiting for pod "kube-apiserver-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.295066  846424 pod_ready.go:94] pod "kube-apiserver-addons-459729" is "Ready"
	I1026 14:16:32.295090  846424 pod_ready.go:86] duration metric: took 3.601221ms for pod "kube-apiserver-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.297016  846424 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.680183  846424 pod_ready.go:94] pod "kube-controller-manager-addons-459729" is "Ready"
	I1026 14:16:32.680220  846424 pod_ready.go:86] duration metric: took 383.185277ms for pod "kube-controller-manager-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.879387  846424 pod_ready.go:83] waiting for pod "kube-proxy-2f7sr" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:33.279767  846424 pod_ready.go:94] pod "kube-proxy-2f7sr" is "Ready"
	I1026 14:16:33.279836  846424 pod_ready.go:86] duration metric: took 400.42041ms for pod "kube-proxy-2f7sr" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:33.480448  846424 pod_ready.go:83] waiting for pod "kube-scheduler-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:33.880276  846424 pod_ready.go:94] pod "kube-scheduler-addons-459729" is "Ready"
	I1026 14:16:33.880305  846424 pod_ready.go:86] duration metric: took 399.829511ms for pod "kube-scheduler-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:33.880320  846424 pod_ready.go:40] duration metric: took 1.604687476s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
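
pod_ready.go's "Ready" test reduces to one predicate on pod status: the conditions list must contain a PodReady condition whose status is True. A sketch of that predicate over client-go types (the helper name is illustrative):

	package main
	
	import (
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
	)
	
	// isPodReady reports whether the pod's status carries a Ready
	// condition set to True; absent or False/Unknown means not ready.
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		p := &corev1.Pod{Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionTrue},
			},
		}}
		fmt.Println(isPodReady(p)) // true
	}
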
	I1026 14:16:33.928054  846424 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 14:16:33.930783  846424 out.go:179] * Done! kubectl is now configured to use "addons-459729" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 26 14:20:42 addons-459729 crio[770]: time="2025-10-26T14:20:42.087589177Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=0f3c14b8-0d26-4ed3-9ea4-e76e73d03a76 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:20:42 addons-459729 crio[770]: time="2025-10-26T14:20:42.087815504Z" level=info msg="Image docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 not found" id=0f3c14b8-0d26-4ed3-9ea4-e76e73d03a76 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:20:42 addons-459729 crio[770]: time="2025-10-26T14:20:42.087872021Z" level=info msg="Neither image nor artfiact docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 found" id=0f3c14b8-0d26-4ed3-9ea4-e76e73d03a76 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:21:12 addons-459729 crio[770]: time="2025-10-26T14:21:12.067436435Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Oct 26 14:21:42 addons-459729 crio[770]: time="2025-10-26T14:21:42.726799075Z" level=info msg="Pulling image: docker.io/nginx:latest" id=b6e710d0-3a93-4dc9-9f71-182aeb800e9e name=/runtime.v1.ImageService/PullImage
	Oct 26 14:21:42 addons-459729 crio[770]: time="2025-10-26T14:21:42.731407769Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 26 14:21:57 addons-459729 crio[770]: time="2025-10-26T14:21:57.087342761Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=7a15b32e-5427-4d25-93dd-fb37baddab5c name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:21:57 addons-459729 crio[770]: time="2025-10-26T14:21:57.087527648Z" level=info msg="Image docker.io/nginx:alpine not found" id=7a15b32e-5427-4d25-93dd-fb37baddab5c name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:21:57 addons-459729 crio[770]: time="2025-10-26T14:21:57.087577833Z" level=info msg="Neither image nor artfiact docker.io/nginx:alpine found" id=7a15b32e-5427-4d25-93dd-fb37baddab5c name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:22:11 addons-459729 crio[770]: time="2025-10-26T14:22:11.087632039Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=82dddf71-2445-4531-b861-2b4dddee1b41 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:22:11 addons-459729 crio[770]: time="2025-10-26T14:22:11.087829351Z" level=info msg="Image docker.io/nginx:alpine not found" id=82dddf71-2445-4531-b861-2b4dddee1b41 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:22:11 addons-459729 crio[770]: time="2025-10-26T14:22:11.087874122Z" level=info msg="Neither image nor artfiact docker.io/nginx:alpine found" id=82dddf71-2445-4531-b861-2b4dddee1b41 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:22:13 addons-459729 crio[770]: time="2025-10-26T14:22:13.385704852Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 26 14:22:44 addons-459729 crio[770]: time="2025-10-26T14:22:44.062377726Z" level=info msg="Trying to access \"docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8\""
	Oct 26 14:23:14 addons-459729 crio[770]: time="2025-10-26T14:23:14.709509897Z" level=info msg="Pulling image: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=6e6dcf86-ba35-4a89-8d1b-79ee4921ff12 name=/runtime.v1.ImageService/PullImage
	Oct 26 14:23:14 addons-459729 crio[770]: time="2025-10-26T14:23:14.714333317Z" level=info msg="Trying to access \"docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\""
	Oct 26 14:23:45 addons-459729 crio[770]: time="2025-10-26T14:23:45.355505812Z" level=info msg="Trying to access \"docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\""
	Oct 26 14:24:16 addons-459729 crio[770]: time="2025-10-26T14:24:16.001500692Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=6ecd8bc2-cc61-46e1-832e-350f96c00190 name=/runtime.v1.ImageService/PullImage
	Oct 26 14:24:16 addons-459729 crio[770]: time="2025-10-26T14:24:16.019039132Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Oct 26 14:24:27 addons-459729 crio[770]: time="2025-10-26T14:24:27.087104298Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=48fd2d91-b72d-4f6b-a47d-a5c18489f2da name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:24:27 addons-459729 crio[770]: time="2025-10-26T14:24:27.087378093Z" level=info msg="Image docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 not found" id=48fd2d91-b72d-4f6b-a47d-a5c18489f2da name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:24:27 addons-459729 crio[770]: time="2025-10-26T14:24:27.087430388Z" level=info msg="Neither image nor artfiact docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 found" id=48fd2d91-b72d-4f6b-a47d-a5c18489f2da name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:24:40 addons-459729 crio[770]: time="2025-10-26T14:24:40.087752802Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=8e9880e2-b7f1-4a8b-bcd6-cc684cf69f1b name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:24:40 addons-459729 crio[770]: time="2025-10-26T14:24:40.088014407Z" level=info msg="Image docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 not found" id=8e9880e2-b7f1-4a8b-bcd6-cc684cf69f1b name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:24:40 addons-459729 crio[770]: time="2025-10-26T14:24:40.088088224Z" level=info msg="Neither image nor artfiact docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 found" id=8e9880e2-b7f1-4a8b-bcd6-cc684cf69f1b name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	27b70ccf2a2bc       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 minutes ago       Running             busybox                                  0                   4df2b4b18d117       busybox                                     default
	19aef1ec8510c       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          8 minutes ago       Running             csi-snapshotter                          0                   76ed6035570f7       csi-hostpathplugin-86x7s                    kube-system
	61a5097a66804       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          8 minutes ago       Running             csi-provisioner                          0                   76ed6035570f7       csi-hostpathplugin-86x7s                    kube-system
	621ed44d4d0c9       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            8 minutes ago       Running             liveness-probe                           0                   76ed6035570f7       csi-hostpathplugin-86x7s                    kube-system
	423188941aea4       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             8 minutes ago       Running             controller                               0                   5f8435b6e04f2       ingress-nginx-controller-675c5ddd98-5ppwr   ingress-nginx
	441d937b8068c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 8 minutes ago       Running             gcp-auth                                 0                   323c55def826a       gcp-auth-78565c9fb4-5728j                   gcp-auth
	0957c0a36894a       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           8 minutes ago       Running             hostpath                                 0                   76ed6035570f7       csi-hostpathplugin-86x7s                    kube-system
	066ff52c2ddcd       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            8 minutes ago       Running             gadget                                   0                   4eb2ecaed9e87       gadget-kzxfz                                gadget
	3552d128c67c5       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                8 minutes ago       Running             node-driver-registrar                    0                   76ed6035570f7       csi-hostpathplugin-86x7s                    kube-system
	97c4cd86f30ed       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             8 minutes ago       Exited              patch                                    2                   abda503e132df       ingress-nginx-admission-patch-tpf9p         ingress-nginx
	e0688bdc55e0b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              8 minutes ago       Running             registry-proxy                           0                   e7362f18db413       registry-proxy-cs2k2                        kube-system
	0f54646dd806e       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     8 minutes ago       Running             nvidia-device-plugin-ctr                 0                   c4c36c0bc4659       nvidia-device-plugin-daemonset-24shm        kube-system
	83682e4a110f1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   8 minutes ago       Running             csi-external-health-monitor-controller   0                   76ed6035570f7       csi-hostpathplugin-86x7s                    kube-system
	ea6861a45ac70       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     8 minutes ago       Running             amd-gpu-device-plugin                    0                   c6d4e2f783cad       amd-gpu-device-plugin-cpl45                 kube-system
	0314c0bc382ed       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      8 minutes ago       Running             volume-snapshot-controller               0                   00d15442e9fe3       snapshot-controller-7d9fbc56b8-d9lzl        kube-system
	8362d34d3550e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   8 minutes ago       Exited              create                                   0                   f3bf9fde8769c       ingress-nginx-admission-create-6rf28        ingress-nginx
	12266be6b9ab3       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      8 minutes ago       Running             volume-snapshot-controller               0                   9ef34a2a027ac       snapshot-controller-7d9fbc56b8-wrh9q        kube-system
	7c8dc6d14b139       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             8 minutes ago       Running             csi-attacher                             0                   7976607f84d97       csi-hostpath-attacher-0                     kube-system
	e712266799f11       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           8 minutes ago       Running             registry                                 0                   f1316c3452f72       registry-6b586f9694-ds6k9                   kube-system
	c3bf40d60ab5e       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              8 minutes ago       Running             csi-resizer                              0                   2cae445dde2d5       csi-hostpath-resizer-0                      kube-system
	b63192b7f745f       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              8 minutes ago       Running             yakd                                     0                   2c72fc205123b       yakd-dashboard-5ff678cb9-dn24s              yakd-dashboard
	c19ddca298d1e       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             9 minutes ago       Running             local-path-provisioner                   0                   1bf1c34fc4541       local-path-provisioner-648f6765c9-zlb8q     local-path-storage
	1c530a50ccecc       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               9 minutes ago       Running             cloud-spanner-emulator                   0                   dfaf3d25c7f4b       cloud-spanner-emulator-86bd5cbb97-xfwfj     default
	db7c2a98e81df       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               9 minutes ago       Running             minikube-ingress-dns                     0                   52ca272c9227c       kube-ingress-dns-minikube                   kube-system
	9bd2912e692dc       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        9 minutes ago       Running             metrics-server                           0                   7b5aa0bab6500       metrics-server-85b7d694d7-g2nwm             kube-system
	ea11dd25ee99e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             9 minutes ago       Running             coredns                                  0                   b9bf05c027e23       coredns-66bc5c9577-58kmh                    kube-system
	6ec65c531ce9b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             9 minutes ago       Running             storage-provisioner                      0                   7e2edd03c74dd       storage-provisioner                         kube-system
	4f25f66b4cedf       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             9 minutes ago       Running             kube-proxy                               0                   a6c25e9b56e3a       kube-proxy-2f7sr                            kube-system
	a0eba15d448be       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             9 minutes ago       Running             kindnet-cni                              0                   84e022be55df3       kindnet-qskcd                               kube-system
	c2b16514601ac       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             10 minutes ago      Running             kube-controller-manager                  0                   b6986a1a2b4b0       kube-controller-manager-addons-459729       kube-system
	102e7dda91245       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             10 minutes ago      Running             kube-scheduler                           0                   79e5b59eeb1c5       kube-scheduler-addons-459729                kube-system
	4150a83c0db93       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             10 minutes ago      Running             etcd                                     0                   d6e35f5ca53c8       etcd-addons-459729                          kube-system
	7a9a679c5c891       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             10 minutes ago      Running             kube-apiserver                           0                   d283821e23e4a       kube-apiserver-addons-459729                kube-system
	
	
	==> coredns [ea11dd25ee99edc9b27421bacea724bf74b1fec81e1f33251d8241d538f0bd7b] <==
	[INFO] 10.244.0.17:43132 - 21631 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000122886s
	[INFO] 10.244.0.17:55984 - 62108 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000085792s
	[INFO] 10.244.0.17:55984 - 62274 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000172297s
	[INFO] 10.244.0.17:59534 - 46029 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000085485s
	[INFO] 10.244.0.17:59534 - 45635 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000130065s
	[INFO] 10.244.0.17:35492 - 64690 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000118967s
	[INFO] 10.244.0.17:35492 - 64268 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000152403s
	[INFO] 10.244.0.21:54006 - 22748 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000194069s
	[INFO] 10.244.0.21:45352 - 54900 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00026904s
	[INFO] 10.244.0.21:38334 - 25222 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129109s
	[INFO] 10.244.0.21:34539 - 64672 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000226506s
	[INFO] 10.244.0.21:59972 - 30687 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125292s
	[INFO] 10.244.0.21:34145 - 41111 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000153861s
	[INFO] 10.244.0.21:52994 - 11684 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003138228s
	[INFO] 10.244.0.21:36916 - 32432 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.004561076s
	[INFO] 10.244.0.21:50024 - 33145 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.003880565s
	[INFO] 10.244.0.21:48825 - 39484 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.0061693s
	[INFO] 10.244.0.21:56944 - 27445 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004052333s
	[INFO] 10.244.0.21:39046 - 54424 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005945025s
	[INFO] 10.244.0.21:51579 - 13184 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004308646s
	[INFO] 10.244.0.21:39799 - 50681 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005368105s
	[INFO] 10.244.0.21:57974 - 51048 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001082611s
	[INFO] 10.244.0.21:51671 - 13280 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001179932s
	[INFO] 10.244.0.26:58819 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000293264s
	[INFO] 10.244.0.26:35243 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000189185s
	
	
	==> describe nodes <==
	Name:               addons-459729
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-459729
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=addons-459729
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T14_14_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-459729
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-459729"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 14:14:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-459729
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 14:24:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 14:24:06 +0000   Sun, 26 Oct 2025 14:14:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 14:24:06 +0000   Sun, 26 Oct 2025 14:14:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 14:24:06 +0000   Sun, 26 Oct 2025 14:14:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 14:24:06 +0000   Sun, 26 Oct 2025 14:15:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-459729
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                f0596a61-354d-402e-9406-4163a5db7e7d
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  default                     cloud-spanner-emulator-86bd5cbb97-xfwfj      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	  gadget                      gadget-kzxfz                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	  gcp-auth                    gcp-auth-78565c9fb4-5728j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-5ppwr    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         9m50s
	  kube-system                 amd-gpu-device-plugin-cpl45                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 coredns-66bc5c9577-58kmh                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m52s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 csi-hostpathplugin-86x7s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 etcd-addons-459729                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m58s
	  kube-system                 kindnet-qskcd                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m52s
	  kube-system                 kube-apiserver-addons-459729                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 kube-controller-manager-addons-459729        200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 kube-proxy-2f7sr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 kube-scheduler-addons-459729                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 metrics-server-85b7d694d7-g2nwm              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         9m50s
	  kube-system                 nvidia-device-plugin-daemonset-24shm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 registry-6b586f9694-ds6k9                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 registry-creds-764b6fb674-dk4lc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 registry-proxy-cs2k2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 snapshot-controller-7d9fbc56b8-d9lzl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 snapshot-controller-7d9fbc56b8-wrh9q         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  local-path-storage          local-path-provisioner-648f6765c9-zlb8q      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-dn24s               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     9m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m50s  kube-proxy       
	  Normal  Starting                 9m57s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m57s  kubelet          Node addons-459729 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m57s  kubelet          Node addons-459729 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m57s  kubelet          Node addons-459729 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m53s  node-controller  Node addons-459729 event: Registered Node addons-459729 in Controller
	  Normal  NodeReady                9m10s  kubelet          Node addons-459729 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	
	
	==> etcd [4150a83c0db93bd824ae7492cd5bbd3cd5b925dc5e29702692a93bb4ebe91e4a] <==
	{"level":"warn","ts":"2025-10-26T14:15:30.731839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:30.749203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:30.756191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:16:06.673503Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.362269ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T14:16:06.673597Z","caller":"traceutil/trace.go:172","msg":"trace[1407356744] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1094; }","duration":"162.491663ms","start":"2025-10-26T14:16:06.511089Z","end":"2025-10-26T14:16:06.673580Z","steps":["trace[1407356744] 'agreement among raft nodes before linearized reading'  (duration: 44.429446ms)","trace[1407356744] 'range keys from in-memory index tree'  (duration: 117.89894ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T14:16:06.675114Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.362867ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040893471723429 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-nt254\" mod_revision:1091 > success:<request_put:<key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-nt254\" value_size:4081 >> failure:<request_range:<key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-nt254\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-26T14:16:06.675395Z","caller":"traceutil/trace.go:172","msg":"trace[1217944354] linearizableReadLoop","detail":"{readStateIndex:1125; appliedIndex:1124; }","duration":"119.89607ms","start":"2025-10-26T14:16:06.555484Z","end":"2025-10-26T14:16:06.675380Z","steps":["trace[1217944354] 'read index received'  (duration: 18.538µs)","trace[1217944354] 'applied index is now lower than readState.Index'  (duration: 119.876207ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T14:16:06.675424Z","caller":"traceutil/trace.go:172","msg":"trace[470063816] transaction","detail":"{read_only:false; response_revision:1095; number_of_response:1; }","duration":"196.815195ms","start":"2025-10-26T14:16:06.478586Z","end":"2025-10-26T14:16:06.675401Z","steps":["trace[470063816] 'process raft request'  (duration: 76.965623ms)","trace[470063816] 'compare'  (duration: 117.805435ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T14:16:06.675523Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"164.37334ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T14:16:06.675739Z","caller":"traceutil/trace.go:172","msg":"trace[1813938213] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1095; }","duration":"164.587149ms","start":"2025-10-26T14:16:06.511134Z","end":"2025-10-26T14:16:06.675722Z","steps":["trace[1813938213] 'agreement among raft nodes before linearized reading'  (duration: 164.337405ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T14:16:06.839498Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.209135ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-10-26T14:16:06.839705Z","caller":"traceutil/trace.go:172","msg":"trace[1627999609] transaction","detail":"{read_only:false; response_revision:1096; number_of_response:1; }","duration":"157.511557ms","start":"2025-10-26T14:16:06.682155Z","end":"2025-10-26T14:16:06.839666Z","steps":["trace[1627999609] 'process raft request'  (duration: 113.355136ms)","trace[1627999609] 'compare'  (duration: 43.875174ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T14:16:06.839935Z","caller":"traceutil/trace.go:172","msg":"trace[222252180] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1095; }","duration":"134.546756ms","start":"2025-10-26T14:16:06.705111Z","end":"2025-10-26T14:16:06.839657Z","steps":["trace[222252180] 'agreement among raft nodes before linearized reading'  (duration: 90.30554ms)","trace[222252180] 'range keys from in-memory index tree'  (duration: 43.778269ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T14:16:10.550138Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.904128ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040893471723491 > lease_revoke:<id:70cc9a20df3b0e67>","response":"size:29"}
	{"level":"info","ts":"2025-10-26T14:16:10.550263Z","caller":"traceutil/trace.go:172","msg":"trace[486118013] linearizableReadLoop","detail":"{readStateIndex:1137; appliedIndex:1136; }","duration":"110.54143ms","start":"2025-10-26T14:16:10.439705Z","end":"2025-10-26T14:16:10.550246Z","steps":["trace[486118013] 'read index received'  (duration: 38.875µs)","trace[486118013] 'applied index is now lower than readState.Index'  (duration: 110.501597ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T14:16:10.550396Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.681137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T14:16:10.550436Z","caller":"traceutil/trace.go:172","msg":"trace[1478287923] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1106; }","duration":"110.733039ms","start":"2025-10-26T14:16:10.439691Z","end":"2025-10-26T14:16:10.550424Z","steps":["trace[1478287923] 'agreement among raft nodes before linearized reading'  (duration: 110.638778ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T14:16:16.552003Z","caller":"traceutil/trace.go:172","msg":"trace[1516111140] linearizableReadLoop","detail":"{readStateIndex:1173; appliedIndex:1173; }","duration":"112.70591ms","start":"2025-10-26T14:16:16.439268Z","end":"2025-10-26T14:16:16.551974Z","steps":["trace[1516111140] 'read index received'  (duration: 112.69236ms)","trace[1516111140] 'applied index is now lower than readState.Index'  (duration: 11.711µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T14:16:16.552177Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.877721ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T14:16:16.552215Z","caller":"traceutil/trace.go:172","msg":"trace[102515432] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1141; }","duration":"112.949469ms","start":"2025-10-26T14:16:16.439258Z","end":"2025-10-26T14:16:16.552208Z","steps":["trace[102515432] 'agreement among raft nodes before linearized reading'  (duration: 112.841453ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T14:16:16.552194Z","caller":"traceutil/trace.go:172","msg":"trace[1700144795] transaction","detail":"{read_only:false; response_revision:1142; number_of_response:1; }","duration":"131.964726ms","start":"2025-10-26T14:16:16.420209Z","end":"2025-10-26T14:16:16.552174Z","steps":["trace[1700144795] 'process raft request'  (duration: 131.800273ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T14:16:16.701205Z","caller":"traceutil/trace.go:172","msg":"trace[808680941] transaction","detail":"{read_only:false; response_revision:1143; number_of_response:1; }","duration":"143.197143ms","start":"2025-10-26T14:16:16.557989Z","end":"2025-10-26T14:16:16.701187Z","steps":["trace[808680941] 'process raft request'  (duration: 143.039766ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T14:24:52.833269Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1642}
	{"level":"info","ts":"2025-10-26T14:24:52.858819Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1642,"took":"24.88994ms","hash":3214463440,"current-db-size-bytes":5652480,"current-db-size":"5.7 MB","current-db-size-in-use-bytes":3538944,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2025-10-26T14:24:52.858870Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3214463440,"revision":1642,"compact-revision":-1}
	
	
	==> gcp-auth [441d937b8068cc86fcb3a873cae9bcb6e3f4a3e79071a803935c38b3f14746aa] <==
	2025/10/26 14:16:19 GCP Auth Webhook started!
	2025/10/26 14:16:34 Ready to marshal response ...
	2025/10/26 14:16:34 Ready to write response ...
	2025/10/26 14:16:34 Ready to marshal response ...
	2025/10/26 14:16:34 Ready to write response ...
	2025/10/26 14:16:34 Ready to marshal response ...
	2025/10/26 14:16:34 Ready to write response ...
	2025/10/26 14:16:43 Ready to marshal response ...
	2025/10/26 14:16:43 Ready to write response ...
	2025/10/26 14:16:43 Ready to marshal response ...
	2025/10/26 14:16:43 Ready to write response ...
	2025/10/26 14:16:51 Ready to marshal response ...
	2025/10/26 14:16:51 Ready to write response ...
	2025/10/26 14:16:51 Ready to marshal response ...
	2025/10/26 14:16:51 Ready to write response ...
	2025/10/26 14:16:52 Ready to marshal response ...
	2025/10/26 14:16:52 Ready to write response ...
	2025/10/26 14:16:58 Ready to marshal response ...
	2025/10/26 14:16:58 Ready to write response ...
	
	
	==> kernel <==
	 14:24:54 up  2:07,  0 user,  load average: 0.06, 0.29, 0.83
	Linux addons-459729 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a0eba15d448bec4198d79695967a6f8e6718f30814fcdde9252cc843d58f1702] <==
	I1026 14:22:52.854282       1 main.go:301] handling current node
	I1026 14:23:02.861708       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:23:02.861748       1 main.go:301] handling current node
	I1026 14:23:12.855880       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:23:12.855920       1 main.go:301] handling current node
	I1026 14:23:22.854954       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:23:22.854990       1 main.go:301] handling current node
	I1026 14:23:32.853943       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:23:32.853975       1 main.go:301] handling current node
	I1026 14:23:42.855125       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:23:42.855198       1 main.go:301] handling current node
	I1026 14:23:52.856018       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:23:52.856062       1 main.go:301] handling current node
	I1026 14:24:02.856857       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:24:02.856896       1 main.go:301] handling current node
	I1026 14:24:12.859603       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:24:12.859634       1 main.go:301] handling current node
	I1026 14:24:22.863711       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:24:22.863746       1 main.go:301] handling current node
	I1026 14:24:32.854963       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:24:32.855022       1 main.go:301] handling current node
	I1026 14:24:42.854924       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:24:42.854971       1 main.go:301] handling current node
	I1026 14:24:52.854785       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:24:52.854892       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7a9a679c5c891888d2fe6da11a5021a47a92d61386bbbc79c23ddd0de01e1321] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1026 14:15:47.285903       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.72.119:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.72.119:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.72.119:443: connect: connection refused" logger="UnhandledError"
	W1026 14:15:48.288076       1 handler_proxy.go:99] no RequestInfo found in the context
	W1026 14:15:48.288110       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 14:15:48.288150       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 14:15:48.288194       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1026 14:15:48.288197       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 14:15:48.289343       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 14:15:52.296856       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 14:15:52.296916       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1026 14:15:52.297001       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.72.119:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.72.119:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1026 14:15:52.305409       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1026 14:16:40.620694       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43486: use of closed network connection
	E1026 14:16:40.776236       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43516: use of closed network connection
	I1026 14:16:51.862047       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1026 14:16:52.188280       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.229.60"}
	I1026 14:24:53.696482       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [c2b16514601ac206983ecc827f418a7f7c9779b86a8ac77a095c139429ddb09c] <==
	I1026 14:15:00.709179       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 14:15:00.709318       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 14:15:00.709123       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 14:15:00.709809       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 14:15:00.711675       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:15:00.711687       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 14:15:00.713528       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 14:15:00.714754       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:15:00.716565       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 14:15:00.716650       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 14:15:00.716691       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 14:15:00.716697       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 14:15:00.716703       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 14:15:00.717979       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 14:15:00.723523       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-459729" podCIDRs=["10.244.0.0/24"]
	I1026 14:15:00.729033       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1026 14:15:03.029718       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1026 14:15:30.716451       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 14:15:30.716588       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1026 14:15:30.716640       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1026 14:15:30.737460       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1026 14:15:30.741504       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1026 14:15:30.817050       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:15:30.842433       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 14:15:45.647726       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4f25f66b4cedfe4a67445f7535bebe5278f7e84ec91c43ad9eee37d250277e78] <==
	I1026 14:15:02.702855       1 server_linux.go:53] "Using iptables proxy"
	I1026 14:15:02.996001       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 14:15:03.096217       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 14:15:03.096266       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 14:15:03.096360       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 14:15:03.183548       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 14:15:03.183613       1 server_linux.go:132] "Using iptables Proxier"
	I1026 14:15:03.194275       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 14:15:03.197537       1 server.go:527] "Version info" version="v1.34.1"
	I1026 14:15:03.197760       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 14:15:03.199843       1 config.go:200] "Starting service config controller"
	I1026 14:15:03.200789       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 14:15:03.200404       1 config.go:106] "Starting endpoint slice config controller"
	I1026 14:15:03.200979       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 14:15:03.200421       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 14:15:03.200995       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 14:15:03.201046       1 config.go:309] "Starting node config controller"
	I1026 14:15:03.201051       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 14:15:03.201056       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 14:15:03.301636       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 14:15:03.301650       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 14:15:03.301679       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [102e7dda912458a4fb7c5cf795d24e3f7f8111609a7f9f3d6aa2ac793be7d8ed] <==
	E1026 14:14:53.714707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 14:14:53.714721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 14:14:53.714925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 14:14:53.715056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 14:14:53.715252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 14:14:53.715267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:14:53.715339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 14:14:53.715410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 14:14:53.715473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 14:14:53.715543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 14:14:53.715570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 14:14:53.716376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 14:14:54.598092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 14:14:54.611465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 14:14:54.687609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 14:14:54.701877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 14:14:54.779666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 14:14:54.787848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1026 14:14:54.799124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:14:54.827579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 14:14:54.851711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 14:14:54.882786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 14:14:54.883667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 14:14:54.953839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1026 14:14:57.411028       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 14:21:01 addons-459729 kubelet[1307]: I1026 14:21:01.086412    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cs2k2" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:21:21 addons-459729 kubelet[1307]: I1026 14:21:21.086836    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-cpl45" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:21:42 addons-459729 kubelet[1307]: E1026 14:21:42.726327    1307 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 26 14:21:42 addons-459729 kubelet[1307]: E1026 14:21:42.726395    1307 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 26 14:21:42 addons-459729 kubelet[1307]: E1026 14:21:42.726640    1307 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(d99505c9-bb9c-4c52-90e0-9ab7033b32bf): ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 26 14:21:42 addons-459729 kubelet[1307]: E1026 14:21:42.726713    1307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="d99505c9-bb9c-4c52-90e0-9ab7033b32bf"
	Oct 26 14:21:57 addons-459729 kubelet[1307]: E1026 14:21:57.088040    1307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="d99505c9-bb9c-4c52-90e0-9ab7033b32bf"
	Oct 26 14:22:09 addons-459729 kubelet[1307]: I1026 14:22:09.086942    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-24shm" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:22:24 addons-459729 kubelet[1307]: I1026 14:22:24.086732    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cs2k2" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:22:45 addons-459729 kubelet[1307]: I1026 14:22:45.086405    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-cpl45" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:23:14 addons-459729 kubelet[1307]: E1026 14:23:14.708980    1307 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 26 14:23:14 addons-459729 kubelet[1307]: E1026 14:23:14.709057    1307 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 26 14:23:14 addons-459729 kubelet[1307]: E1026 14:23:14.709301    1307 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(4bad36d2-59a2-4ff8-b30a-5b4bfd7f204f): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 26 14:23:14 addons-459729 kubelet[1307]: E1026 14:23:14.709376    1307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="4bad36d2-59a2-4ff8-b30a-5b4bfd7f204f"
	Oct 26 14:23:18 addons-459729 kubelet[1307]: I1026 14:23:18.086827    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-24shm" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:23:28 addons-459729 kubelet[1307]: I1026 14:23:28.086418    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cs2k2" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:23:28 addons-459729 kubelet[1307]: E1026 14:23:28.086965    1307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="4bad36d2-59a2-4ff8-b30a-5b4bfd7f204f"
	Oct 26 14:24:13 addons-459729 kubelet[1307]: I1026 14:24:13.086903    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-cpl45" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:24:16 addons-459729 kubelet[1307]: E1026 14:24:16.000872    1307 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605"
	Oct 26 14:24:16 addons-459729 kubelet[1307]: E1026 14:24:16.000952    1307 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605"
	Oct 26 14:24:16 addons-459729 kubelet[1307]: E1026 14:24:16.001229    1307 kuberuntime_manager.go:1449] "Unhandled Error" err="container registry-creds start failed in pod registry-creds-764b6fb674-dk4lc_kube-system(11a2adc0-f603-426f-af30-919a48eee4bc): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 26 14:24:16 addons-459729 kubelet[1307]: E1026 14:24:16.001309    1307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-creds-764b6fb674-dk4lc" podUID="11a2adc0-f603-426f-af30-919a48eee4bc"
	Oct 26 14:24:27 addons-459729 kubelet[1307]: E1026 14:24:27.087807    1307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-creds-764b6fb674-dk4lc" podUID="11a2adc0-f603-426f-af30-919a48eee4bc"
	Oct 26 14:24:34 addons-459729 kubelet[1307]: I1026 14:24:34.086864    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-24shm" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:24:41 addons-459729 kubelet[1307]: I1026 14:24:41.086682    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cs2k2" secret="" err="secret \"gcp-auth\" not found"
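Every hard failure in the kubelet section above shares one root cause: anonymous pulls from docker.io tripping Docker Hub's unauthenticated rate limit (toomanyrequests). The interleaved "gcp-auth not found" lines are informational only; the pull simply proceeds without that secret. A minimal sketch of probing the remaining anonymous pull budget via Docker's documented token-then-HEAD flow, with library/nginx as an arbitrary sample repository:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Fetch an anonymous pull token for library/nginx (Docker's documented flow).
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/nginx:pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// HEAD a manifest; Docker Hub reports the anonymous pull budget in the
	// ratelimit-limit / ratelimit-remaining response headers.
	req, err := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/library/nginx/manifests/latest", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()
	fmt.Println("status:", res.Status)
	fmt.Println("ratelimit-limit:", res.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
}

When ratelimit-remaining reads 0, pulls fail exactly as logged above until the window resets or authenticated credentials are supplied.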
	
	
	==> storage-provisioner [6ec65c531ce9b20e7dfdb9cdb1623754497a4088bbed9f545ad3b0f28e423539] <==
	W1026 14:24:30.112899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:32.115810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:32.120629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:34.123826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:34.128950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:36.132393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:36.136240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:38.139430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:38.143371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:40.147073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:40.151055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:42.154332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:42.159458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:44.162791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:44.167096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:46.171042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:46.175102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:48.178569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:48.183850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:50.186896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:50.191236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:52.194301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:52.199964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:54.203807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:24:54.207953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
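These warnings repeat on a two-second cadence because storage-provisioner still takes what is most likely its leader-election lock on a v1 Endpoints object, which apiservers from v1.33 on flag as deprecated; they are noise here, not a failure. A minimal sketch of the Lease-based replacement using client-go's leaderelection package, assuming in-cluster config; the lease name, namespace, and POD_NAME identity are illustrative:

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A Lease lock replaces the deprecated Endpoints-based lock that
	// triggers the warnings above.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{Name: "storage-provisioner", Namespace: "kube-system"},
		Client:    client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{
			Identity: os.Getenv("POD_NAME"), // per-replica identity (assumed env var)
		},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("became leader; start provisioning") },
			OnStoppedLeading: func() { log.Println("lost leadership; stop provisioning") },
		},
	})
}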
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-459729 -n addons-459729
helpers_test.go:269: (dbg) Run:  kubectl --context addons-459729 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod ingress-nginx-admission-create-6rf28 ingress-nginx-admission-patch-tpf9p registry-creds-764b6fb674-dk4lc
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-459729 describe pod nginx task-pv-pod ingress-nginx-admission-create-6rf28 ingress-nginx-admission-patch-tpf9p registry-creds-764b6fb674-dk4lc
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-459729 describe pod nginx task-pv-pod ingress-nginx-admission-create-6rf28 ingress-nginx-admission-patch-tpf9p registry-creds-764b6fb674-dk4lc: exit status 1 (79.952556ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-459729/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:16:52 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwdp7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xwdp7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  8m2s                  default-scheduler  Successfully assigned default/nginx to addons-459729
	  Warning  Failed     3m12s (x2 over 7m1s)  kubelet            Failed to pull image "docker.io/nginx:alpine": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m12s (x2 over 7m1s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    2m57s (x2 over 7m1s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m57s (x2 over 7m1s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m43s (x3 over 8m2s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-459729/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:16:58 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vhr62 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-vhr62:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  7m56s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-459729
	  Warning  Failed     100s (x2 over 5m29s)  kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     100s (x2 over 5m29s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    86s (x2 over 5m28s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     86s (x2 over 5m28s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    75s (x3 over 7m55s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6rf28" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tpf9p" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-dk4lc" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-459729 describe pod nginx task-pv-pod ingress-nginx-admission-create-6rf28 ingress-nginx-admission-patch-tpf9p registry-creds-764b6fb674-dk4lc: exit status 1
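The post-mortem sweep at helpers_test.go:269 collects the stuck pods server-side with a field selector rather than filtering client-side. A minimal client-go equivalent of that kubectl invocation, assuming a reachable kubeconfig:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}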
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-459729 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-459729 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (259.001726ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 14:24:54.967249  864037 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:24:54.967536  864037 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:24:54.967547  864037 out.go:374] Setting ErrFile to fd 2...
	I1026 14:24:54.967551  864037 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:24:54.967820  864037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:24:54.968185  864037 mustload.go:65] Loading cluster: addons-459729
	I1026 14:24:54.968582  864037 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:24:54.968603  864037 addons.go:606] checking whether the cluster is paused
	I1026 14:24:54.968704  864037 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:24:54.968730  864037 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:24:54.969216  864037 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:24:54.987525  864037 ssh_runner.go:195] Run: systemctl --version
	I1026 14:24:54.987597  864037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:24:55.006329  864037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:24:55.106198  864037 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:24:55.106289  864037 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:24:55.138125  864037 cri.go:89] found id: "19aef1ec8510c14e849b7cefcdc09f57ad870ee7d19676222f9e11dadd8cc042"
	I1026 14:24:55.138146  864037 cri.go:89] found id: "61a5097a66804c567922e9da53afc210c2fdbb85ff910118e9760dee39f0d040"
	I1026 14:24:55.138150  864037 cri.go:89] found id: "621ed44d4d0c9c98dcc6f5d7791c964154a9fdfc066b031a81eea94bead4881f"
	I1026 14:24:55.138152  864037 cri.go:89] found id: "0957c0a36894ac64f64707cab794cc2ea3ec3052b89e5973d410bc3d470f0ccc"
	I1026 14:24:55.138155  864037 cri.go:89] found id: "3552d128c67c5f8bc101f8fec4ea4a567c8e554450e010cea9fff33e2fb35c57"
	I1026 14:24:55.138172  864037 cri.go:89] found id: "e0688bdc55e0b1428d713099dfcdead41642afc46111de5efa3f9e8fc577a82f"
	I1026 14:24:55.138177  864037 cri.go:89] found id: "0f54646dd806e6f1d2d2a55010ade3d07b7c4c78f14093b5ea24c778c704d8d9"
	I1026 14:24:55.138181  864037 cri.go:89] found id: "83682e4a110f1836b76b9ab37ae5bdb5165df03ddd6d4aab400697fb4757a66a"
	I1026 14:24:55.138185  864037 cri.go:89] found id: "ea6861a45ac70f5a40063121e871650cf8d06fbf282521746f2f1cec0f96e741"
	I1026 14:24:55.138202  864037 cri.go:89] found id: "0314c0bc382ed36965ef868e31dc0f76b6d82e34f43bf5a49c4799ecd426990c"
	I1026 14:24:55.138207  864037 cri.go:89] found id: "12266be6b9ab3bae1170a4813366b003d8d74419265ae8317f745310842b0eb6"
	I1026 14:24:55.138209  864037 cri.go:89] found id: "7c8dc6d14b139c980202322abce8e8be08218ec570fe222c54763e5032be2feb"
	I1026 14:24:55.138212  864037 cri.go:89] found id: "e712266799f113c6e29070d3b446eb814ab3d82a01e5503cf6d420bc5d9dd807"
	I1026 14:24:55.138215  864037 cri.go:89] found id: "c3bf40d60ab5e31a883ca325e0e0ec980516a554582873a5c7653558a6a05c25"
	I1026 14:24:55.138217  864037 cri.go:89] found id: "db7c2a98e81dfa3a84fa710f2fe409325e697b34c28852544eccec3493ba6c36"
	I1026 14:24:55.138224  864037 cri.go:89] found id: "9bd2912e692dc7dc8832b9f484bdfcb583e9e399f257d572d4fddb38842ac29a"
	I1026 14:24:55.138230  864037 cri.go:89] found id: "ea11dd25ee99edc9b27421bacea724bf74b1fec81e1f33251d8241d538f0bd7b"
	I1026 14:24:55.138234  864037 cri.go:89] found id: "6ec65c531ce9b20e7dfdb9cdb1623754497a4088bbed9f545ad3b0f28e423539"
	I1026 14:24:55.138237  864037 cri.go:89] found id: "4f25f66b4cedfe4a67445f7535bebe5278f7e84ec91c43ad9eee37d250277e78"
	I1026 14:24:55.138240  864037 cri.go:89] found id: "a0eba15d448bec4198d79695967a6f8e6718f30814fcdde9252cc843d58f1702"
	I1026 14:24:55.138242  864037 cri.go:89] found id: "c2b16514601ac206983ecc827f418a7f7c9779b86a8ac77a095c139429ddb09c"
	I1026 14:24:55.138244  864037 cri.go:89] found id: "102e7dda912458a4fb7c5cf795d24e3f7f8111609a7f9f3d6aa2ac793be7d8ed"
	I1026 14:24:55.138247  864037 cri.go:89] found id: "4150a83c0db93bd824ae7492cd5bbd3cd5b925dc5e29702692a93bb4ebe91e4a"
	I1026 14:24:55.138249  864037 cri.go:89] found id: "7a9a679c5c891888d2fe6da11a5021a47a92d61386bbbc79c23ddd0de01e1321"
	I1026 14:24:55.138252  864037 cri.go:89] found id: ""
	I1026 14:24:55.138292  864037 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:24:55.153525  864037 out.go:203] 
	W1026 14:24:55.154931  864037 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:24:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:24:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:24:55.154953  864037 out.go:285] * 
	* 
	W1026 14:24:55.159751  864037 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:24:55.161355  864037 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-459729 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
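Every MK_ADDON_DISABLE_PAUSED exit in this report traces to the same step: minikube's paused-cluster check shells out to "sudo runc list -f json", and runc aborts because its state directory /run/runc does not exist on this crio node. A minimal sketch of a more tolerant check that treats a missing state directory as "no paused containers"; this is an illustrative workaround, not minikube's actual fix:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// runcState holds the two fields of "runc list -f json" output used here.
type runcState struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused returns IDs of paused containers under the given runc root,
// treating a missing state directory (the failure above) as an empty list.
func listPaused(root string) ([]string, error) {
	out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "no such file or directory") {
			return nil, nil // no state dir: nothing running, so nothing paused
		}
		return nil, fmt.Errorf("runc list: %v: %s", err, out)
	}
	var states []runcState
	if err := json.Unmarshal(out, &states); err != nil {
		return nil, err
	}
	var paused []string
	for _, s := range states {
		if s.Status == "paused" {
			paused = append(paused, s.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused("/run/runc")
	fmt.Println(ids, err)
}

With no containers, "runc list -f json" prints null, which unmarshals into an empty slice, so both the empty and the missing-directory cases report no paused containers.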
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-459729 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-459729 addons disable ingress --alsologtostderr -v=1: exit status 11 (260.403406ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 14:24:55.226396  864098 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:24:55.226502  864098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:24:55.226511  864098 out.go:374] Setting ErrFile to fd 2...
	I1026 14:24:55.226514  864098 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:24:55.226723  864098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:24:55.226998  864098 mustload.go:65] Loading cluster: addons-459729
	I1026 14:24:55.227369  864098 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:24:55.227387  864098 addons.go:606] checking whether the cluster is paused
	I1026 14:24:55.227469  864098 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:24:55.227488  864098 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:24:55.227850  864098 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:24:55.246408  864098 ssh_runner.go:195] Run: systemctl --version
	I1026 14:24:55.246488  864098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:24:55.264748  864098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:24:55.365084  864098 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:24:55.365157  864098 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:24:55.396050  864098 cri.go:89] found id: "19aef1ec8510c14e849b7cefcdc09f57ad870ee7d19676222f9e11dadd8cc042"
	I1026 14:24:55.396083  864098 cri.go:89] found id: "61a5097a66804c567922e9da53afc210c2fdbb85ff910118e9760dee39f0d040"
	I1026 14:24:55.396089  864098 cri.go:89] found id: "621ed44d4d0c9c98dcc6f5d7791c964154a9fdfc066b031a81eea94bead4881f"
	I1026 14:24:55.396092  864098 cri.go:89] found id: "0957c0a36894ac64f64707cab794cc2ea3ec3052b89e5973d410bc3d470f0ccc"
	I1026 14:24:55.396095  864098 cri.go:89] found id: "3552d128c67c5f8bc101f8fec4ea4a567c8e554450e010cea9fff33e2fb35c57"
	I1026 14:24:55.396098  864098 cri.go:89] found id: "e0688bdc55e0b1428d713099dfcdead41642afc46111de5efa3f9e8fc577a82f"
	I1026 14:24:55.396101  864098 cri.go:89] found id: "0f54646dd806e6f1d2d2a55010ade3d07b7c4c78f14093b5ea24c778c704d8d9"
	I1026 14:24:55.396104  864098 cri.go:89] found id: "83682e4a110f1836b76b9ab37ae5bdb5165df03ddd6d4aab400697fb4757a66a"
	I1026 14:24:55.396106  864098 cri.go:89] found id: "ea6861a45ac70f5a40063121e871650cf8d06fbf282521746f2f1cec0f96e741"
	I1026 14:24:55.396111  864098 cri.go:89] found id: "0314c0bc382ed36965ef868e31dc0f76b6d82e34f43bf5a49c4799ecd426990c"
	I1026 14:24:55.396114  864098 cri.go:89] found id: "12266be6b9ab3bae1170a4813366b003d8d74419265ae8317f745310842b0eb6"
	I1026 14:24:55.396117  864098 cri.go:89] found id: "7c8dc6d14b139c980202322abce8e8be08218ec570fe222c54763e5032be2feb"
	I1026 14:24:55.396119  864098 cri.go:89] found id: "e712266799f113c6e29070d3b446eb814ab3d82a01e5503cf6d420bc5d9dd807"
	I1026 14:24:55.396123  864098 cri.go:89] found id: "c3bf40d60ab5e31a883ca325e0e0ec980516a554582873a5c7653558a6a05c25"
	I1026 14:24:55.396125  864098 cri.go:89] found id: "db7c2a98e81dfa3a84fa710f2fe409325e697b34c28852544eccec3493ba6c36"
	I1026 14:24:55.396140  864098 cri.go:89] found id: "9bd2912e692dc7dc8832b9f484bdfcb583e9e399f257d572d4fddb38842ac29a"
	I1026 14:24:55.396150  864098 cri.go:89] found id: "ea11dd25ee99edc9b27421bacea724bf74b1fec81e1f33251d8241d538f0bd7b"
	I1026 14:24:55.396157  864098 cri.go:89] found id: "6ec65c531ce9b20e7dfdb9cdb1623754497a4088bbed9f545ad3b0f28e423539"
	I1026 14:24:55.396182  864098 cri.go:89] found id: "4f25f66b4cedfe4a67445f7535bebe5278f7e84ec91c43ad9eee37d250277e78"
	I1026 14:24:55.396187  864098 cri.go:89] found id: "a0eba15d448bec4198d79695967a6f8e6718f30814fcdde9252cc843d58f1702"
	I1026 14:24:55.396194  864098 cri.go:89] found id: "c2b16514601ac206983ecc827f418a7f7c9779b86a8ac77a095c139429ddb09c"
	I1026 14:24:55.396196  864098 cri.go:89] found id: "102e7dda912458a4fb7c5cf795d24e3f7f8111609a7f9f3d6aa2ac793be7d8ed"
	I1026 14:24:55.396199  864098 cri.go:89] found id: "4150a83c0db93bd824ae7492cd5bbd3cd5b925dc5e29702692a93bb4ebe91e4a"
	I1026 14:24:55.396201  864098 cri.go:89] found id: "7a9a679c5c891888d2fe6da11a5021a47a92d61386bbbc79c23ddd0de01e1321"
	I1026 14:24:55.396204  864098 cri.go:89] found id: ""
	I1026 14:24:55.396246  864098 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:24:55.414239  864098 out.go:203] 
	W1026 14:24:55.415559  864098 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:24:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:24:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:24:55.415591  864098 out.go:285] * 
	* 
	W1026 14:24:55.420319  864098 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:24:55.421820  864098 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-459729 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (483.82s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-kzxfz" [f2b6672a-cf9a-4701-bd89-dc31949ee567] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0036603s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-459729 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-459729 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (269.655845ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 14:16:52.448351  856800 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:16:52.448622  856800 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:16:52.448635  856800 out.go:374] Setting ErrFile to fd 2...
	I1026 14:16:52.448638  856800 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:16:52.448813  856800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:16:52.449097  856800 mustload.go:65] Loading cluster: addons-459729
	I1026 14:16:52.449477  856800 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:52.449497  856800 addons.go:606] checking whether the cluster is paused
	I1026 14:16:52.449583  856800 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:52.449595  856800 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:16:52.450010  856800 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:16:52.468978  856800 ssh_runner.go:195] Run: systemctl --version
	I1026 14:16:52.469040  856800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:16:52.488276  856800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:16:52.590077  856800 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:16:52.590206  856800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:16:52.628455  856800 cri.go:89] found id: "19aef1ec8510c14e849b7cefcdc09f57ad870ee7d19676222f9e11dadd8cc042"
	I1026 14:16:52.628493  856800 cri.go:89] found id: "61a5097a66804c567922e9da53afc210c2fdbb85ff910118e9760dee39f0d040"
	I1026 14:16:52.628500  856800 cri.go:89] found id: "621ed44d4d0c9c98dcc6f5d7791c964154a9fdfc066b031a81eea94bead4881f"
	I1026 14:16:52.628504  856800 cri.go:89] found id: "0957c0a36894ac64f64707cab794cc2ea3ec3052b89e5973d410bc3d470f0ccc"
	I1026 14:16:52.628508  856800 cri.go:89] found id: "3552d128c67c5f8bc101f8fec4ea4a567c8e554450e010cea9fff33e2fb35c57"
	I1026 14:16:52.628514  856800 cri.go:89] found id: "e0688bdc55e0b1428d713099dfcdead41642afc46111de5efa3f9e8fc577a82f"
	I1026 14:16:52.628518  856800 cri.go:89] found id: "0f54646dd806e6f1d2d2a55010ade3d07b7c4c78f14093b5ea24c778c704d8d9"
	I1026 14:16:52.628522  856800 cri.go:89] found id: "83682e4a110f1836b76b9ab37ae5bdb5165df03ddd6d4aab400697fb4757a66a"
	I1026 14:16:52.628526  856800 cri.go:89] found id: "ea6861a45ac70f5a40063121e871650cf8d06fbf282521746f2f1cec0f96e741"
	I1026 14:16:52.628540  856800 cri.go:89] found id: "0314c0bc382ed36965ef868e31dc0f76b6d82e34f43bf5a49c4799ecd426990c"
	I1026 14:16:52.628550  856800 cri.go:89] found id: "12266be6b9ab3bae1170a4813366b003d8d74419265ae8317f745310842b0eb6"
	I1026 14:16:52.628554  856800 cri.go:89] found id: "7c8dc6d14b139c980202322abce8e8be08218ec570fe222c54763e5032be2feb"
	I1026 14:16:52.628558  856800 cri.go:89] found id: "e712266799f113c6e29070d3b446eb814ab3d82a01e5503cf6d420bc5d9dd807"
	I1026 14:16:52.628562  856800 cri.go:89] found id: "c3bf40d60ab5e31a883ca325e0e0ec980516a554582873a5c7653558a6a05c25"
	I1026 14:16:52.628566  856800 cri.go:89] found id: "db7c2a98e81dfa3a84fa710f2fe409325e697b34c28852544eccec3493ba6c36"
	I1026 14:16:52.628595  856800 cri.go:89] found id: "9bd2912e692dc7dc8832b9f484bdfcb583e9e399f257d572d4fddb38842ac29a"
	I1026 14:16:52.628607  856800 cri.go:89] found id: "ea11dd25ee99edc9b27421bacea724bf74b1fec81e1f33251d8241d538f0bd7b"
	I1026 14:16:52.628611  856800 cri.go:89] found id: "6ec65c531ce9b20e7dfdb9cdb1623754497a4088bbed9f545ad3b0f28e423539"
	I1026 14:16:52.628614  856800 cri.go:89] found id: "4f25f66b4cedfe4a67445f7535bebe5278f7e84ec91c43ad9eee37d250277e78"
	I1026 14:16:52.628616  856800 cri.go:89] found id: "a0eba15d448bec4198d79695967a6f8e6718f30814fcdde9252cc843d58f1702"
	I1026 14:16:52.628618  856800 cri.go:89] found id: "c2b16514601ac206983ecc827f418a7f7c9779b86a8ac77a095c139429ddb09c"
	I1026 14:16:52.628621  856800 cri.go:89] found id: "102e7dda912458a4fb7c5cf795d24e3f7f8111609a7f9f3d6aa2ac793be7d8ed"
	I1026 14:16:52.628623  856800 cri.go:89] found id: "4150a83c0db93bd824ae7492cd5bbd3cd5b925dc5e29702692a93bb4ebe91e4a"
	I1026 14:16:52.628625  856800 cri.go:89] found id: "7a9a679c5c891888d2fe6da11a5021a47a92d61386bbbc79c23ddd0de01e1321"
	I1026 14:16:52.628628  856800 cri.go:89] found id: ""
	I1026 14:16:52.628695  856800 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:16:52.643640  856800 out.go:203] 
	W1026 14:16:52.644882  856800 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:16:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:16:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:16:52.644908  856800 out.go:285] * 
	* 
	W1026 14:16:52.650115  856800 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:16:52.651450  856800 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-459729 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.34s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.99674ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-g2nwm" [ea0a025f-f342-49d8-89cc-a9bd82a08b87] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003062848s
addons_test.go:463: (dbg) Run:  kubectl --context addons-459729 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-459729 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-459729 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (263.902602ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 14:16:46.180983  855801 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:16:46.181330  855801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:16:46.181340  855801 out.go:374] Setting ErrFile to fd 2...
	I1026 14:16:46.181347  855801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:16:46.181582  855801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:16:46.181898  855801 mustload.go:65] Loading cluster: addons-459729
	I1026 14:16:46.182321  855801 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:46.182343  855801 addons.go:606] checking whether the cluster is paused
	I1026 14:16:46.182458  855801 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:46.182482  855801 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:16:46.182884  855801 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:16:46.201145  855801 ssh_runner.go:195] Run: systemctl --version
	I1026 14:16:46.201224  855801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:16:46.218982  855801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:16:46.320221  855801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:16:46.320297  855801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:16:46.353971  855801 cri.go:89] found id: "19aef1ec8510c14e849b7cefcdc09f57ad870ee7d19676222f9e11dadd8cc042"
	I1026 14:16:46.354007  855801 cri.go:89] found id: "61a5097a66804c567922e9da53afc210c2fdbb85ff910118e9760dee39f0d040"
	I1026 14:16:46.354012  855801 cri.go:89] found id: "621ed44d4d0c9c98dcc6f5d7791c964154a9fdfc066b031a81eea94bead4881f"
	I1026 14:16:46.354015  855801 cri.go:89] found id: "0957c0a36894ac64f64707cab794cc2ea3ec3052b89e5973d410bc3d470f0ccc"
	I1026 14:16:46.354017  855801 cri.go:89] found id: "3552d128c67c5f8bc101f8fec4ea4a567c8e554450e010cea9fff33e2fb35c57"
	I1026 14:16:46.354022  855801 cri.go:89] found id: "e0688bdc55e0b1428d713099dfcdead41642afc46111de5efa3f9e8fc577a82f"
	I1026 14:16:46.354024  855801 cri.go:89] found id: "0f54646dd806e6f1d2d2a55010ade3d07b7c4c78f14093b5ea24c778c704d8d9"
	I1026 14:16:46.354027  855801 cri.go:89] found id: "83682e4a110f1836b76b9ab37ae5bdb5165df03ddd6d4aab400697fb4757a66a"
	I1026 14:16:46.354029  855801 cri.go:89] found id: "ea6861a45ac70f5a40063121e871650cf8d06fbf282521746f2f1cec0f96e741"
	I1026 14:16:46.354040  855801 cri.go:89] found id: "0314c0bc382ed36965ef868e31dc0f76b6d82e34f43bf5a49c4799ecd426990c"
	I1026 14:16:46.354045  855801 cri.go:89] found id: "12266be6b9ab3bae1170a4813366b003d8d74419265ae8317f745310842b0eb6"
	I1026 14:16:46.354049  855801 cri.go:89] found id: "7c8dc6d14b139c980202322abce8e8be08218ec570fe222c54763e5032be2feb"
	I1026 14:16:46.354052  855801 cri.go:89] found id: "e712266799f113c6e29070d3b446eb814ab3d82a01e5503cf6d420bc5d9dd807"
	I1026 14:16:46.354056  855801 cri.go:89] found id: "c3bf40d60ab5e31a883ca325e0e0ec980516a554582873a5c7653558a6a05c25"
	I1026 14:16:46.354060  855801 cri.go:89] found id: "db7c2a98e81dfa3a84fa710f2fe409325e697b34c28852544eccec3493ba6c36"
	I1026 14:16:46.354076  855801 cri.go:89] found id: "9bd2912e692dc7dc8832b9f484bdfcb583e9e399f257d572d4fddb38842ac29a"
	I1026 14:16:46.354087  855801 cri.go:89] found id: "ea11dd25ee99edc9b27421bacea724bf74b1fec81e1f33251d8241d538f0bd7b"
	I1026 14:16:46.354091  855801 cri.go:89] found id: "6ec65c531ce9b20e7dfdb9cdb1623754497a4088bbed9f545ad3b0f28e423539"
	I1026 14:16:46.354094  855801 cri.go:89] found id: "4f25f66b4cedfe4a67445f7535bebe5278f7e84ec91c43ad9eee37d250277e78"
	I1026 14:16:46.354096  855801 cri.go:89] found id: "a0eba15d448bec4198d79695967a6f8e6718f30814fcdde9252cc843d58f1702"
	I1026 14:16:46.354098  855801 cri.go:89] found id: "c2b16514601ac206983ecc827f418a7f7c9779b86a8ac77a095c139429ddb09c"
	I1026 14:16:46.354101  855801 cri.go:89] found id: "102e7dda912458a4fb7c5cf795d24e3f7f8111609a7f9f3d6aa2ac793be7d8ed"
	I1026 14:16:46.354104  855801 cri.go:89] found id: "4150a83c0db93bd824ae7492cd5bbd3cd5b925dc5e29702692a93bb4ebe91e4a"
	I1026 14:16:46.354107  855801 cri.go:89] found id: "7a9a679c5c891888d2fe6da11a5021a47a92d61386bbbc79c23ddd0de01e1321"
	I1026 14:16:46.354110  855801 cri.go:89] found id: ""
	I1026 14:16:46.354173  855801 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:16:46.369265  855801 out.go:203] 
	W1026 14:16:46.370427  855801 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:16:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:16:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:16:46.370445  855801 out.go:285] * 
	* 
	W1026 14:16:46.375154  855801 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:16:46.376614  855801 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-459729 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.34s)
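
Every addon-disable failure in this run appears to share this root cause: minikube's pre-flight "is the cluster paused?" check shells into the node and runs the two commands below, and the second one fails because /run/runc does not exist on this crio node. A manual reproduction sketch; both node-side commands are copied from the stderr trace above:

    out/minikube-linux-amd64 -p addons-459729 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    out/minikube-linux-amd64 -p addons-459729 ssh -- sudo runc list -f json   # exits 1: open /run/runc: no such file or directory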

TestAddons/parallel/CSI (369.45s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1026 14:16:52.659089  845095 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1026 14:16:52.662618  845095 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1026 14:16:52.662645  845095 kapi.go:107] duration metric: took 3.564315ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.576345ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-459729 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-459729 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-459729 get pvc hpvc -o jsonpath={.status.phase} -n default
2025/10/26 14:16:53 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:402: (dbg) Run:  kubectl --context addons-459729 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-459729 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-459729 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-459729 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-459729 get pvc hpvc -o jsonpath={.status.phase} -n default
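
The helper polls the claim's phase with the jsonpath query shown above until it leaves Pending. An equivalent single command, as a sketch assuming a kubectl recent enough to support jsonpath waits (v1.23+):

    kubectl --context addons-459729 wait pvc/hpvc -n default --for=jsonpath='{.status.phase}'=Bound --timeout=6m0s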
addons_test.go:562: (dbg) Run:  kubectl --context addons-459729 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [4bad36d2-59a2-4ff8-b30a-5b4bfd7f204f] Pending
helpers_test.go:352: "task-pv-pod" [4bad36d2-59a2-4ff8-b30a-5b4bfd7f204f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-459729 -n addons-459729
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-10-26 14:22:59.317114886 +0000 UTC m=+532.872512264
addons_test.go:567: (dbg) Run:  kubectl --context addons-459729 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-459729 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-459729/192.168.49.2
Start Time:       Sun, 26 Oct 2025 14:16:58 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.29
IPs:
  IP:  10.244.0.29
Containers:
  task-pv-container:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           80/TCP (http-server)
    Host Port:      0/TCP (http-server)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vhr62 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc
    ReadOnly:   false
  kube-api-access-vhr62:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  6m1s                default-scheduler  Successfully assigned default/task-pv-pod to addons-459729
  Warning  Failed     3m34s               kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     3m34s               kubelet            Error: ErrImagePull
  Normal   BackOff    3m33s               kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     3m33s               kubelet            Error: ImagePullBackOff
  Normal   Pulling    3m18s (x2 over 6m)  kubelet            Pulling image "docker.io/nginx"
addons_test.go:567: (dbg) Run:  kubectl --context addons-459729 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-459729 logs task-pv-pod -n default: exit status 1 (72.86406ms)

** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:567: kubectl --context addons-459729 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
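
The events above point at Docker Hub's anonymous pull rate limit rather than at the CSI driver itself. A hedged workaround sketch for reruns: pull the image once on the host and side-load it into the profile so the kubelet never contacts docker.io (this assumes the host daemon can still pull, e.g. it is authenticated or already has the image cached):

    docker pull docker.io/nginx:latest
    out/minikube-linux-amd64 -p addons-459729 image load docker.io/nginx:latest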
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-459729
helpers_test.go:243: (dbg) docker inspect addons-459729:

-- stdout --
	[
	    {
	        "Id": "fc6e75fab9c5724b831e93e0ad2a93d91d49dd1e164485d8b27b314fbc5e0b99",
	        "Created": "2025-10-26T14:14:40.52606534Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 847075,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T14:14:40.558709556Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/fc6e75fab9c5724b831e93e0ad2a93d91d49dd1e164485d8b27b314fbc5e0b99/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc6e75fab9c5724b831e93e0ad2a93d91d49dd1e164485d8b27b314fbc5e0b99/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc6e75fab9c5724b831e93e0ad2a93d91d49dd1e164485d8b27b314fbc5e0b99/hosts",
	        "LogPath": "/var/lib/docker/containers/fc6e75fab9c5724b831e93e0ad2a93d91d49dd1e164485d8b27b314fbc5e0b99/fc6e75fab9c5724b831e93e0ad2a93d91d49dd1e164485d8b27b314fbc5e0b99-json.log",
	        "Name": "/addons-459729",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-459729:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-459729",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc6e75fab9c5724b831e93e0ad2a93d91d49dd1e164485d8b27b314fbc5e0b99",
	                "LowerDir": "/var/lib/docker/overlay2/be283a9f8cd9ccd9baac09b427be1213a6b5c9cded6ad57cc7c2dd84f70df753-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/be283a9f8cd9ccd9baac09b427be1213a6b5c9cded6ad57cc7c2dd84f70df753/merged",
	                "UpperDir": "/var/lib/docker/overlay2/be283a9f8cd9ccd9baac09b427be1213a6b5c9cded6ad57cc7c2dd84f70df753/diff",
	                "WorkDir": "/var/lib/docker/overlay2/be283a9f8cd9ccd9baac09b427be1213a6b5c9cded6ad57cc7c2dd84f70df753/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-459729",
	                "Source": "/var/lib/docker/volumes/addons-459729/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-459729",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-459729",
	                "name.minikube.sigs.k8s.io": "addons-459729",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "27cac62847effb19906009c5979fe40bbf685a449ce5b4deb39ded6dddff8b6f",
	            "SandboxKey": "/var/run/docker/netns/27cac62847ef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33536"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33537"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33540"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33538"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33539"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-459729": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:b4:86:17:1e:a3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "35dc3def6cc813d1d5c906424df9f8355bd88f05b16bb1826e9958e3c782a1a4",
	                    "EndpointID": "3162d9d223ad2c1fef671da2ec9c0200d2ce47e2eeda4daaba75d1967d709ae6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-459729",
	                        "fc6e75fab9c5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
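
The NetworkSettings.Ports block in the inspect output above is what minikube parses to locate the node's SSH endpoint (127.0.0.1:33536, matching the sshutil line in the metrics-server trace). The same mapping can be read back directly; the second command is a shell-quoted form of the inspect template from that trace:

    docker port addons-459729 22/tcp   # prints 127.0.0.1:33536
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-459729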
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-459729 -n addons-459729
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-459729 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-459729 logs -n 25: (1.168422041s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-313763                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-313763   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ delete  │ -p download-only-008452                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-008452   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ start   │ --download-only -p download-docker-939440 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-939440 │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ delete  │ -p download-docker-939440                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-939440 │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ start   │ --download-only -p binary-mirror-114305 --alsologtostderr --binary-mirror http://127.0.0.1:44689 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-114305   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ delete  │ -p binary-mirror-114305                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-114305   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ addons  │ enable dashboard -p addons-459729                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ addons  │ disable dashboard -p addons-459729                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ start   │ -p addons-459729 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:16 UTC │
	│ addons  │ addons-459729 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ addons  │ addons-459729 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ addons  │ enable headlamp -p addons-459729 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ addons  │ addons-459729 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ addons  │ addons-459729 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ addons  │ addons-459729 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ ssh     │ addons-459729 ssh cat /opt/local-path-provisioner/pvc-618f90bd-473d-4ea6-99a0-92fd8df748d0_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │ 26 Oct 25 14:16 UTC │
	│ addons  │ addons-459729 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ addons  │ addons-459729 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-459729                                                                                                                                                                                                                                                                                                                                                                                           │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │ 26 Oct 25 14:16 UTC │
	│ addons  │ addons-459729 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ addons  │ addons-459729 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ ip      │ addons-459729 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │ 26 Oct 25 14:16 UTC │
	│ addons  │ addons-459729 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ addons  │ addons-459729 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ addons  │ addons-459729 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:17 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 14:14:17
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 14:14:17.112515  846424 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:14:17.112795  846424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:14:17.112803  846424 out.go:374] Setting ErrFile to fd 2...
	I1026 14:14:17.112807  846424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:14:17.112990  846424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:14:17.113534  846424 out.go:368] Setting JSON to false
	I1026 14:14:17.114463  846424 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7005,"bootTime":1761481052,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 14:14:17.114570  846424 start.go:141] virtualization: kvm guest
	I1026 14:14:17.116382  846424 out.go:179] * [addons-459729] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 14:14:17.117587  846424 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 14:14:17.117592  846424 notify.go:220] Checking for updates...
	I1026 14:14:17.118732  846424 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:14:17.119875  846424 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 14:14:17.121054  846424 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 14:14:17.122198  846424 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 14:14:17.123215  846424 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 14:14:17.124682  846424 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:14:17.149310  846424 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 14:14:17.149487  846424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:14:17.207621  846424 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-26 14:14:17.197494844 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 14:14:17.207741  846424 docker.go:318] overlay module found
	I1026 14:14:17.209500  846424 out.go:179] * Using the docker driver based on user configuration
	I1026 14:14:17.210611  846424 start.go:305] selected driver: docker
	I1026 14:14:17.210627  846424 start.go:925] validating driver "docker" against <nil>
	I1026 14:14:17.210642  846424 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 14:14:17.211282  846424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:14:17.265537  846424 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-26 14:14:17.255623393 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 14:14:17.265767  846424 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 14:14:17.266017  846424 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 14:14:17.268242  846424 out.go:179] * Using Docker driver with root privileges
	I1026 14:14:17.269488  846424 cni.go:84] Creating CNI manager for ""
	I1026 14:14:17.269559  846424 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 14:14:17.269572  846424 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 14:14:17.269643  846424 start.go:349] cluster config:
	{Name:addons-459729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-459729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:14:17.270969  846424 out.go:179] * Starting "addons-459729" primary control-plane node in "addons-459729" cluster
	I1026 14:14:17.272134  846424 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 14:14:17.273402  846424 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 14:14:17.274551  846424 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 14:14:17.274581  846424 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 14:14:17.274602  846424 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 14:14:17.274611  846424 cache.go:58] Caching tarball of preloaded images
	I1026 14:14:17.274710  846424 preload.go:233] Found /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 14:14:17.274721  846424 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 14:14:17.275086  846424 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/config.json ...
	I1026 14:14:17.275112  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/config.json: {Name:mk9529b624fed8d03806b178f8e915dee8aa0e87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:17.292287  846424 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1026 14:14:17.292466  846424 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1026 14:14:17.292494  846424 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1026 14:14:17.292500  846424 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1026 14:14:17.292513  846424 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1026 14:14:17.292520  846424 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1026 14:14:29.432150  846424 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1026 14:14:29.432207  846424 cache.go:232] Successfully downloaded all kic artifacts
	I1026 14:14:29.432255  846424 start.go:360] acquireMachinesLock for addons-459729: {Name:mk6d98d5da8e9c6ee516b00ba1c75ff50ea84eb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 14:14:29.432358  846424 start.go:364] duration metric: took 82.777µs to acquireMachinesLock for "addons-459729"
	I1026 14:14:29.432384  846424 start.go:93] Provisioning new machine with config: &{Name:addons-459729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-459729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 14:14:29.432464  846424 start.go:125] createHost starting for "" (driver="docker")
	I1026 14:14:29.434070  846424 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1026 14:14:29.434326  846424 start.go:159] libmachine.API.Create for "addons-459729" (driver="docker")
	I1026 14:14:29.434382  846424 client.go:168] LocalClient.Create starting
	I1026 14:14:29.434474  846424 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem
	I1026 14:14:29.636359  846424 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem
	I1026 14:14:29.991463  846424 cli_runner.go:164] Run: docker network inspect addons-459729 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 14:14:30.008472  846424 cli_runner.go:211] docker network inspect addons-459729 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 14:14:30.008584  846424 network_create.go:284] running [docker network inspect addons-459729] to gather additional debugging logs...
	I1026 14:14:30.008611  846424 cli_runner.go:164] Run: docker network inspect addons-459729
	W1026 14:14:30.026519  846424 cli_runner.go:211] docker network inspect addons-459729 returned with exit code 1
	I1026 14:14:30.026548  846424 network_create.go:287] error running [docker network inspect addons-459729]: docker network inspect addons-459729: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-459729 not found
	I1026 14:14:30.026559  846424 network_create.go:289] output of [docker network inspect addons-459729]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-459729 not found
	
	** /stderr **
	I1026 14:14:30.026678  846424 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 14:14:30.043803  846424 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021485f0}
	I1026 14:14:30.043866  846424 network_create.go:124] attempt to create docker network addons-459729 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1026 14:14:30.043913  846424 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-459729 addons-459729
	I1026 14:14:30.100466  846424 network_create.go:108] docker network addons-459729 192.168.49.0/24 created
	I1026 14:14:30.100509  846424 kic.go:121] calculated static IP "192.168.49.2" for the "addons-459729" container
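The gateway (.1) and the node's static IP (.2) above are derived mechanically from the chosen subnet. A small Go sketch of that arithmetic (an assumption for illustration, not the actual network.go code):

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	_, subnet, _ := net.ParseCIDR("192.168.49.0/24") // the free subnet picked above
    	gw := make(net.IP, 4)
    	copy(gw, subnet.IP.To4())
    	gw[3] = 1 // gateway: first host address in the range
    	node := make(net.IP, 4)
    	copy(node, subnet.IP.To4())
    	node[3] = 2 // ClientMin, used as the node container's static IP
    	fmt.Println(gw, node) // 192.168.49.1 192.168.49.2
    }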
	I1026 14:14:30.100583  846424 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 14:14:30.116904  846424 cli_runner.go:164] Run: docker volume create addons-459729 --label name.minikube.sigs.k8s.io=addons-459729 --label created_by.minikube.sigs.k8s.io=true
	I1026 14:14:30.135222  846424 oci.go:103] Successfully created a docker volume addons-459729
	I1026 14:14:30.135299  846424 cli_runner.go:164] Run: docker run --rm --name addons-459729-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-459729 --entrypoint /usr/bin/test -v addons-459729:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 14:14:36.146492  846424 cli_runner.go:217] Completed: docker run --rm --name addons-459729-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-459729 --entrypoint /usr/bin/test -v addons-459729:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (6.011135666s)
	I1026 14:14:36.146530  846424 oci.go:107] Successfully prepared a docker volume addons-459729
	I1026 14:14:36.146583  846424 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 14:14:36.146616  846424 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 14:14:36.146686  846424 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-459729:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 14:14:40.450984  846424 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-459729:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.304224683s)
	I1026 14:14:40.451018  846424 kic.go:203] duration metric: took 4.304399454s to extract preloaded images to volume ...
	W1026 14:14:40.451121  846424 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 14:14:40.451155  846424 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 14:14:40.451213  846424 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 14:14:40.510278  846424 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-459729 --name addons-459729 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-459729 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-459729 --network addons-459729 --ip 192.168.49.2 --volume addons-459729:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 14:14:40.765991  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Running}}
	I1026 14:14:40.784464  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:14:40.802012  846424 cli_runner.go:164] Run: docker exec addons-459729 stat /var/lib/dpkg/alternatives/iptables
	I1026 14:14:40.851940  846424 oci.go:144] the created container "addons-459729" has a running status.
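The container publishes its ports on ephemeral localhost ports (--publish=127.0.0.1::8443 and friends above), so later steps have to ask Docker which host port was assigned. The repeated "docker container inspect -f ..." calls below do exactly that for 22/tcp; a standalone Go sketch of the same query (hypothetical helper, reusing the Go template string from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same Go template the log shows cli_runner passing to docker.
    	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", format, "addons-459729").Output()
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 33536 in this run
    }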
	I1026 14:14:40.851973  846424 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa...
	I1026 14:14:40.949694  846424 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 14:14:40.978174  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:14:41.000243  846424 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 14:14:41.000276  846424 kic_runner.go:114] Args: [docker exec --privileged addons-459729 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 14:14:41.043571  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:14:41.069582  846424 machine.go:93] provisionDockerMachine start ...
	I1026 14:14:41.069796  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:41.093554  846424 main.go:141] libmachine: Using SSH client type: native
	I1026 14:14:41.093778  846424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33536 <nil> <nil>}
	I1026 14:14:41.093791  846424 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 14:14:41.243331  846424 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-459729
	
	I1026 14:14:41.243363  846424 ubuntu.go:182] provisioning hostname "addons-459729"
	I1026 14:14:41.243419  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:41.261776  846424 main.go:141] libmachine: Using SSH client type: native
	I1026 14:14:41.262051  846424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33536 <nil> <nil>}
	I1026 14:14:41.262072  846424 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-459729 && echo "addons-459729" | sudo tee /etc/hostname
	I1026 14:14:41.414391  846424 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-459729
	
	I1026 14:14:41.414497  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:41.433449  846424 main.go:141] libmachine: Using SSH client type: native
	I1026 14:14:41.433812  846424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33536 <nil> <nil>}
	I1026 14:14:41.433851  846424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-459729' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-459729/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-459729' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 14:14:41.575368  846424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 14:14:41.575416  846424 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-841519/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-841519/.minikube}
	I1026 14:14:41.575444  846424 ubuntu.go:190] setting up certificates
	I1026 14:14:41.575464  846424 provision.go:84] configureAuth start
	I1026 14:14:41.575530  846424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-459729
	I1026 14:14:41.593069  846424 provision.go:143] copyHostCerts
	I1026 14:14:41.593211  846424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem (1123 bytes)
	I1026 14:14:41.593370  846424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem (1675 bytes)
	I1026 14:14:41.593473  846424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem (1082 bytes)
	I1026 14:14:41.593572  846424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem org=jenkins.addons-459729 san=[127.0.0.1 192.168.49.2 addons-459729 localhost minikube]
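The san=[...] list above feeds a server certificate signed by the minikube CA. A minimal, self-contained Go sketch of issuing a certificate with those SANs (illustrative only; self-signed here for brevity, whereas the run above signs with ca.pem/ca-key.pem):

    // Sketch: issue an RSA server certificate carrying the SANs logged above.
    // Error handling elided; this is not minikube's actual provision code.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.addons-459729"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration:26280h0m0s
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"addons-459729", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    	}
    	// Parent == template, i.e. self-signed; the real flow passes the CA cert/key instead.
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }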
	I1026 14:14:41.952749  846424 provision.go:177] copyRemoteCerts
	I1026 14:14:41.952809  846424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 14:14:41.952864  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:41.971059  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:14:42.071814  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 14:14:42.091550  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 14:14:42.109573  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 14:14:42.127661  846424 provision.go:87] duration metric: took 552.178827ms to configureAuth
	I1026 14:14:42.127694  846424 ubuntu.go:206] setting minikube options for container-runtime
	I1026 14:14:42.127910  846424 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:14:42.128035  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:42.145755  846424 main.go:141] libmachine: Using SSH client type: native
	I1026 14:14:42.145991  846424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33536 <nil> <nil>}
	I1026 14:14:42.146015  846424 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 14:14:42.398484  846424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 14:14:42.398510  846424 machine.go:96] duration metric: took 1.328895029s to provisionDockerMachine
	I1026 14:14:42.398521  846424 client.go:171] duration metric: took 12.964130689s to LocalClient.Create
	I1026 14:14:42.398541  846424 start.go:167] duration metric: took 12.964216103s to libmachine.API.Create "addons-459729"
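Aside on the /etc/sysconfig/crio.minikube file written just above: it is an environment file that the crio systemd unit in the kicbase image presumably sources, so CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' marks the whole service CIDR as an insecure registry range, letting in-cluster registry Services be pulled from without TLS.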
	I1026 14:14:42.398551  846424 start.go:293] postStartSetup for "addons-459729" (driver="docker")
	I1026 14:14:42.398565  846424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 14:14:42.398618  846424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 14:14:42.398665  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:42.416371  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:14:42.518463  846424 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 14:14:42.521931  846424 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 14:14:42.521963  846424 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 14:14:42.521977  846424 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/addons for local assets ...
	I1026 14:14:42.522046  846424 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/files for local assets ...
	I1026 14:14:42.522073  846424 start.go:296] duration metric: took 123.514687ms for postStartSetup
	I1026 14:14:42.522380  846424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-459729
	I1026 14:14:42.540283  846424 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/config.json ...
	I1026 14:14:42.540575  846424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 14:14:42.540629  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:42.558249  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:14:42.655957  846424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 14:14:42.660462  846424 start.go:128] duration metric: took 13.22797972s to createHost
	I1026 14:14:42.660486  846424 start.go:83] releasing machines lock for "addons-459729", held for 13.228116528s
	I1026 14:14:42.660551  846424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-459729
	I1026 14:14:42.677972  846424 ssh_runner.go:195] Run: cat /version.json
	I1026 14:14:42.678042  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:42.678103  846424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 14:14:42.678186  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:42.696981  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:14:42.697266  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:14:42.856351  846424 ssh_runner.go:195] Run: systemctl --version
	I1026 14:14:42.863288  846424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 14:14:42.900301  846424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 14:14:42.905120  846424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 14:14:42.905196  846424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 14:14:42.932600  846424 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 14:14:42.932623  846424 start.go:495] detecting cgroup driver to use...
	I1026 14:14:42.932656  846424 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 14:14:42.932705  846424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 14:14:42.948987  846424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 14:14:42.961218  846424 docker.go:218] disabling cri-docker service (if available) ...
	I1026 14:14:42.961271  846424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 14:14:42.977976  846424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 14:14:42.995853  846424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 14:14:43.078675  846424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 14:14:43.167078  846424 docker.go:234] disabling docker service ...
	I1026 14:14:43.167150  846424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 14:14:43.186433  846424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 14:14:43.199219  846424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 14:14:43.281310  846424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 14:14:43.363611  846424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 14:14:43.376627  846424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 14:14:43.391082  846424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 14:14:43.391147  846424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:14:43.401654  846424 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 14:14:43.401722  846424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:14:43.411314  846424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:14:43.420752  846424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:14:43.430053  846424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 14:14:43.438422  846424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:14:43.447584  846424 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:14:43.462065  846424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:14:43.471427  846424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 14:14:43.478920  846424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 14:14:43.486416  846424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 14:14:43.566863  846424 ssh_runner.go:195] Run: sudo systemctl restart crio
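Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly this fragment before crio is restarted (reconstructed from the commands, not captured from the host):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The last setting lets containers bind ports below 1024 without extra privileges, which ingress-style addons rely on.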
	I1026 14:14:43.671842  846424 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 14:14:43.671918  846424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 14:14:43.675998  846424 start.go:563] Will wait 60s for crictl version
	I1026 14:14:43.676061  846424 ssh_runner.go:195] Run: which crictl
	I1026 14:14:43.679709  846424 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 14:14:43.706317  846424 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 14:14:43.706420  846424 ssh_runner.go:195] Run: crio --version
	I1026 14:14:43.734316  846424 ssh_runner.go:195] Run: crio --version
	I1026 14:14:43.764384  846424 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 14:14:43.765785  846424 cli_runner.go:164] Run: docker network inspect addons-459729 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 14:14:43.783001  846424 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 14:14:43.787207  846424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
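Note the pattern in the command above: the filtered hosts file is written to /tmp/h.$$ first and then copied into place with sudo cp, because in a plain "sudo ... > /etc/hosts" the redirection would be performed by the unprivileged shell and fail.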
	I1026 14:14:43.797548  846424 kubeadm.go:883] updating cluster {Name:addons-459729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-459729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 14:14:43.797721  846424 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 14:14:43.797793  846424 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 14:14:43.832123  846424 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 14:14:43.832145  846424 crio.go:433] Images already preloaded, skipping extraction
	I1026 14:14:43.832214  846424 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 14:14:43.858842  846424 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 14:14:43.858871  846424 cache_images.go:85] Images are preloaded, skipping loading
	I1026 14:14:43.858883  846424 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1026 14:14:43.859030  846424 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-459729 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-459729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 14:14:43.859110  846424 ssh_runner.go:195] Run: crio config
	I1026 14:14:43.904710  846424 cni.go:84] Creating CNI manager for ""
	I1026 14:14:43.904736  846424 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 14:14:43.904762  846424 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 14:14:43.904789  846424 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-459729 NodeName:addons-459729 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 14:14:43.904928  846424 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-459729"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
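As a usage aside (not something this run does): a generated config like the one above can be sanity-checked offline with kubeadm's dry-run mode before the real init that follows later in the log, e.g.

    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run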
	
	I1026 14:14:43.904991  846424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 14:14:43.913572  846424 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 14:14:43.913638  846424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 14:14:43.921876  846424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1026 14:14:43.934931  846424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 14:14:43.950730  846424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1026 14:14:43.963901  846424 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1026 14:14:43.967671  846424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 14:14:43.977851  846424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 14:14:44.058772  846424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 14:14:44.083941  846424 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729 for IP: 192.168.49.2
	I1026 14:14:44.083989  846424 certs.go:195] generating shared ca certs ...
	I1026 14:14:44.084018  846424 certs.go:227] acquiring lock for ca certs: {Name:mkc310765b5f037cf348f6c57ba521193a825757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:44.084226  846424 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key
	I1026 14:14:44.387912  846424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt ...
	I1026 14:14:44.387946  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt: {Name:mk8933e3107ac3223c09abfcc2b23b2a267f80dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:44.388133  846424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key ...
	I1026 14:14:44.388149  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key: {Name:mk6b1973d9c275e0f32b5e6221cf09f2bcd1d12d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:44.388250  846424 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key
	I1026 14:14:45.246605  846424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt ...
	I1026 14:14:45.246640  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt: {Name:mkdb300b113fc66de4a4109eb2097856fa215e63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.246821  846424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key ...
	I1026 14:14:45.246832  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key: {Name:mkaba3ad2bc7a1a50d30bd9bfd3aea7c19e5fda9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.246922  846424 certs.go:257] generating profile certs ...
	I1026 14:14:45.247013  846424 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.key
	I1026 14:14:45.247033  846424 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt with IP's: []
	I1026 14:14:45.334595  846424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt ...
	I1026 14:14:45.334626  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: {Name:mkafadf8981207eceb9ebbe4962ff018f519fecb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.334804  846424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.key ...
	I1026 14:14:45.334815  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.key: {Name:mka2fbae2418418d747b82adac0fb2b7f375ffa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.334888  846424 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.key.e8921df1
	I1026 14:14:45.334908  846424 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.crt.e8921df1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1026 14:14:45.666093  846424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.crt.e8921df1 ...
	I1026 14:14:45.666125  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.crt.e8921df1: {Name:mkb948c94234f3b4bc97a7b01df3ae78190037f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.666319  846424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.key.e8921df1 ...
	I1026 14:14:45.666337  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.key.e8921df1: {Name:mk3bf95757956aa10cef36d1b4e59b884575ea91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.666413  846424 certs.go:382] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.crt.e8921df1 -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.crt
	I1026 14:14:45.666512  846424 certs.go:386] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.key.e8921df1 -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.key
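The 10.96.0.1 entry in the apiserver cert's SAN list above is the ClusterIP of the default "kubernetes" Service, i.e. the first usable address of the ServiceCIDR 10.96.0.0/12, so in-cluster clients reaching the apiserver via that IP can verify its certificate.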
	I1026 14:14:45.666569  846424 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.key
	I1026 14:14:45.666596  846424 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.crt with IP's: []
	I1026 14:14:45.921156  846424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.crt ...
	I1026 14:14:45.921205  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.crt: {Name:mkbc119a7d5f48960c3f21d5f4d887a967005987 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.921387  846424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.key ...
	I1026 14:14:45.921401  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.key: {Name:mk005d3953795c30c971b42e066689f23e94bbc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.921650  846424 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 14:14:45.921691  846424 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem (1082 bytes)
	I1026 14:14:45.921717  846424 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem (1123 bytes)
	I1026 14:14:45.921738  846424 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem (1675 bytes)
	I1026 14:14:45.922419  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 14:14:45.941068  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 14:14:45.958551  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 14:14:45.976346  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 14:14:45.994052  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 14:14:46.011477  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 14:14:46.028955  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 14:14:46.046187  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 14:14:46.063408  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 14:14:46.082572  846424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 14:14:46.095043  846424 ssh_runner.go:195] Run: openssl version
	I1026 14:14:46.101206  846424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 14:14:46.112299  846424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 14:14:46.116268  846424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:14 /usr/share/ca-certificates/minikubeCA.pem
	I1026 14:14:46.116319  846424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 14:14:46.152435  846424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
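The b5213941.0 symlink created above follows OpenSSL's hashed-directory convention: the filename is the subject hash of the certificate (the value the "openssl x509 -hash -noout -in ..." call just before it prints) plus a ".0" suffix, which is how TLS clients look up the minikube CA in /etc/ssl/certs during verification.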
	I1026 14:14:46.161706  846424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 14:14:46.165576  846424 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 14:14:46.165626  846424 kubeadm.go:400] StartCluster: {Name:addons-459729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-459729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:14:46.165713  846424 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:14:46.165765  846424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:14:46.194501  846424 cri.go:89] found id: ""
	I1026 14:14:46.194576  846424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 14:14:46.202715  846424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 14:14:46.211023  846424 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 14:14:46.211084  846424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 14:14:46.219223  846424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 14:14:46.219242  846424 kubeadm.go:157] found existing configuration files:
	
	I1026 14:14:46.219304  846424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 14:14:46.227401  846424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 14:14:46.227464  846424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 14:14:46.234983  846424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 14:14:46.242551  846424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 14:14:46.242605  846424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 14:14:46.249969  846424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 14:14:46.257567  846424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 14:14:46.257615  846424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 14:14:46.265426  846424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 14:14:46.273171  846424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 14:14:46.273236  846424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 14:14:46.280562  846424 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 14:14:46.343303  846424 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1026 14:14:46.403244  846424 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 14:14:56.860323  846424 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 14:14:56.860407  846424 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 14:14:56.860530  846424 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 14:14:56.860618  846424 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 14:14:56.860662  846424 kubeadm.go:318] OS: Linux
	I1026 14:14:56.860706  846424 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 14:14:56.860748  846424 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 14:14:56.860797  846424 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 14:14:56.860866  846424 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 14:14:56.860933  846424 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 14:14:56.861010  846424 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 14:14:56.861057  846424 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 14:14:56.861095  846424 kubeadm.go:318] CGROUPS_IO: enabled
	I1026 14:14:56.861201  846424 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 14:14:56.861325  846424 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 14:14:56.861408  846424 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 14:14:56.861499  846424 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 14:14:56.863767  846424 out.go:252]   - Generating certificates and keys ...
	I1026 14:14:56.863843  846424 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 14:14:56.863905  846424 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 14:14:56.863967  846424 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 14:14:56.864073  846424 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 14:14:56.864145  846424 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 14:14:56.864216  846424 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 14:14:56.864284  846424 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 14:14:56.864408  846424 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-459729 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 14:14:56.864455  846424 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 14:14:56.864552  846424 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-459729 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 14:14:56.864612  846424 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 14:14:56.864666  846424 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 14:14:56.864721  846424 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 14:14:56.864809  846424 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 14:14:56.864880  846424 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 14:14:56.864955  846424 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 14:14:56.865011  846424 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 14:14:56.865071  846424 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 14:14:56.865154  846424 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 14:14:56.865256  846424 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 14:14:56.865342  846424 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 14:14:56.866657  846424 out.go:252]   - Booting up control plane ...
	I1026 14:14:56.866747  846424 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 14:14:56.866847  846424 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 14:14:56.866934  846424 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 14:14:56.867095  846424 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 14:14:56.867202  846424 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 14:14:56.867333  846424 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 14:14:56.867446  846424 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 14:14:56.867518  846424 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 14:14:56.867705  846424 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 14:14:56.867847  846424 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 14:14:56.867935  846424 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001058368s
	I1026 14:14:56.868063  846424 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 14:14:56.868199  846424 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1026 14:14:56.868310  846424 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 14:14:56.868408  846424 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 14:14:56.868533  846424 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.557698122s
	I1026 14:14:56.868636  846424 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.268793474s
	I1026 14:14:56.868740  846424 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001941586s
	I1026 14:14:56.868848  846424 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 14:14:56.868985  846424 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 14:14:56.869074  846424 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 14:14:56.869319  846424 kubeadm.go:318] [mark-control-plane] Marking the node addons-459729 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 14:14:56.869423  846424 kubeadm.go:318] [bootstrap-token] Using token: f6fn21.ali5nckn8rkh7x29
	I1026 14:14:56.871880  846424 out.go:252]   - Configuring RBAC rules ...
	I1026 14:14:56.871970  846424 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 14:14:56.872081  846424 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 14:14:56.872291  846424 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 14:14:56.872503  846424 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 14:14:56.872682  846424 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 14:14:56.872826  846424 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 14:14:56.872987  846424 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 14:14:56.873058  846424 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 14:14:56.873120  846424 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 14:14:56.873133  846424 kubeadm.go:318] 
	I1026 14:14:56.873228  846424 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 14:14:56.873240  846424 kubeadm.go:318] 
	I1026 14:14:56.873354  846424 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 14:14:56.873363  846424 kubeadm.go:318] 
	I1026 14:14:56.873405  846424 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 14:14:56.873458  846424 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 14:14:56.873503  846424 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 14:14:56.873509  846424 kubeadm.go:318] 
	I1026 14:14:56.873555  846424 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 14:14:56.873560  846424 kubeadm.go:318] 
	I1026 14:14:56.873597  846424 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 14:14:56.873603  846424 kubeadm.go:318] 
	I1026 14:14:56.873643  846424 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 14:14:56.873707  846424 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 14:14:56.873765  846424 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 14:14:56.873770  846424 kubeadm.go:318] 
	I1026 14:14:56.873885  846424 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 14:14:56.873950  846424 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 14:14:56.873955  846424 kubeadm.go:318] 
	I1026 14:14:56.874020  846424 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token f6fn21.ali5nckn8rkh7x29 \
	I1026 14:14:56.874104  846424 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:17405a11f9ced5253329d88582717a258ab19676719f7fb1d52a2fb8fc3ffa0b \
	I1026 14:14:56.874125  846424 kubeadm.go:318] 	--control-plane 
	I1026 14:14:56.874131  846424 kubeadm.go:318] 
	I1026 14:14:56.874231  846424 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 14:14:56.874245  846424 kubeadm.go:318] 
	I1026 14:14:56.874359  846424 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token f6fn21.ali5nckn8rkh7x29 \
	I1026 14:14:56.874513  846424 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:17405a11f9ced5253329d88582717a258ab19676719f7fb1d52a2fb8fc3ffa0b 
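For reference, the --discovery-token-ca-cert-hash printed in both join commands above can be recomputed on the control-plane node; a minimal sketch, assuming the default kubeadm PKI path /etc/kubernetes/pki/ca.crt:

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'

The output should match the sha256:17405a… value embedded in the join commands.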
	I1026 14:14:56.874526  846424 cni.go:84] Creating CNI manager for ""
	I1026 14:14:56.874533  846424 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 14:14:56.876103  846424 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 14:14:56.877647  846424 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 14:14:56.882227  846424 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 14:14:56.882247  846424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 14:14:56.895793  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
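Once the apiserver is reachable, the kindnet manifest applied above can be checked by hand; a quick sketch, assuming kindnet's pods carry an app=kindnet label (the label name is an assumption, not taken from this log):

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get pods -n kube-system -l app=kindnet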
	I1026 14:14:57.106713  846424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 14:14:57.106824  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:14:57.106854  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-459729 minikube.k8s.io/updated_at=2025_10_26T14_14_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=addons-459729 minikube.k8s.io/primary=true
	I1026 14:14:57.117887  846424 ops.go:34] apiserver oom_adj: -16
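The -16 recorded above means the apiserver is strongly shielded from the kernel OOM killer (the legacy oom_adj scale runs -16..15, with -17 disabling OOM kills entirely). Both the legacy knob read by minikube and its modern replacement can be inspected directly; a sketch, assuming pgrep matches exactly one kube-apiserver process:

	cat /proc/$(pgrep kube-apiserver)/oom_adj        # legacy interface, shown in the log
	cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # current kernel interface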
	I1026 14:14:57.187931  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:14:57.688917  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:14:58.188959  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:14:58.688895  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:14:59.188658  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:14:59.688052  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:00.188849  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:00.687985  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:01.188637  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:01.688698  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:01.754745  846424 kubeadm.go:1113] duration metric: took 4.647991318s to wait for elevateKubeSystemPrivileges
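The elevateKubeSystemPrivileges wait it just timed is a poll for the "default" ServiceAccount at a ~500ms cadence; an equivalent shell sketch:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # matches the ~500ms spacing visible in the Run lines above
	done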
	I1026 14:15:01.754787  846424 kubeadm.go:402] duration metric: took 15.58916607s to StartCluster
	I1026 14:15:01.754836  846424 settings.go:142] acquiring lock: {Name:mkab79daecf1fab35293493e1e2484069a81f3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:01.754978  846424 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 14:15:01.755482  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:01.755722  846424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 14:15:01.755738  846424 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 14:15:01.755806  846424 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1026 14:15:01.755939  846424 addons.go:69] Setting yakd=true in profile "addons-459729"
	I1026 14:15:01.755964  846424 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-459729"
	I1026 14:15:01.755989  846424 addons.go:69] Setting registry=true in profile "addons-459729"
	I1026 14:15:01.756000  846424 addons.go:69] Setting inspektor-gadget=true in profile "addons-459729"
	I1026 14:15:01.756006  846424 addons.go:238] Setting addon registry=true in "addons-459729"
	I1026 14:15:01.756016  846424 addons.go:238] Setting addon inspektor-gadget=true in "addons-459729"
	I1026 14:15:01.756040  846424 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:15:01.756049  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.756052  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.756034  846424 addons.go:69] Setting ingress=true in profile "addons-459729"
	I1026 14:15:01.756078  846424 addons.go:238] Setting addon ingress=true in "addons-459729"
	I1026 14:15:01.756055  846424 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-459729"
	I1026 14:15:01.756096  846424 addons.go:69] Setting registry-creds=true in profile "addons-459729"
	I1026 14:15:01.756104  846424 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-459729"
	I1026 14:15:01.756113  846424 addons.go:69] Setting default-storageclass=true in profile "addons-459729"
	I1026 14:15:01.756115  846424 addons.go:69] Setting storage-provisioner=true in profile "addons-459729"
	I1026 14:15:01.756130  846424 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-459729"
	I1026 14:15:01.756140  846424 addons.go:238] Setting addon storage-provisioner=true in "addons-459729"
	I1026 14:15:01.756147  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.756152  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.755989  846424 addons.go:69] Setting ingress-dns=true in profile "addons-459729"
	I1026 14:15:01.756657  846424 addons.go:238] Setting addon ingress-dns=true in "addons-459729"
	I1026 14:15:01.756719  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.756106  846424 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-459729"
	I1026 14:15:01.756889  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.756910  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.757017  846424 addons.go:69] Setting metrics-server=true in profile "addons-459729"
	I1026 14:15:01.757045  846424 addons.go:238] Setting addon metrics-server=true in "addons-459729"
	I1026 14:15:01.757082  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.757139  846424 addons.go:69] Setting volcano=true in profile "addons-459729"
	I1026 14:15:01.757156  846424 addons.go:238] Setting addon volcano=true in "addons-459729"
	I1026 14:15:01.757203  846424 addons.go:69] Setting gcp-auth=true in profile "addons-459729"
	I1026 14:15:01.757206  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.757222  846424 mustload.go:65] Loading cluster: addons-459729
	I1026 14:15:01.757433  846424 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:15:01.757547  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.757549  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.757698  846424 addons.go:69] Setting volumesnapshots=true in profile "addons-459729"
	I1026 14:15:01.757716  846424 addons.go:238] Setting addon volumesnapshots=true in "addons-459729"
	I1026 14:15:01.757734  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.757739  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.757840  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.758388  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.759608  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.755980  846424 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-459729"
	I1026 14:15:01.760402  846424 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-459729"
	I1026 14:15:01.760438  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.756108  846424 addons.go:238] Setting addon registry-creds=true in "addons-459729"
	I1026 14:15:01.760907  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.761372  846424 out.go:179] * Verifying Kubernetes components...
	I1026 14:15:01.761867  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.761926  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.762184  846424 addons.go:69] Setting cloud-spanner=true in profile "addons-459729"
	I1026 14:15:01.762211  846424 addons.go:238] Setting addon cloud-spanner=true in "addons-459729"
	I1026 14:15:01.762241  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.755981  846424 addons.go:238] Setting addon yakd=true in "addons-459729"
	I1026 14:15:01.762447  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.756085  846424 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-459729"
	I1026 14:15:01.762581  846424 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-459729"
	I1026 14:15:01.763509  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.763699  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.763743  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.763750  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.763779  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.764110  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.764900  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.765241  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.765248  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.768115  846424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 14:15:01.824394  846424 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1026 14:15:01.826325  846424 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 14:15:01.826360  846424 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 14:15:01.826434  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.827641  846424 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-459729"
	I1026 14:15:01.827777  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.828346  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	W1026 14:15:01.835680  846424 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1026 14:15:01.838243  846424 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 14:15:01.838670  846424 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1026 14:15:01.838918  846424 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1026 14:15:01.839837  846424 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1026 14:15:01.840040  846424 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1026 14:15:01.840135  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.840788  846424 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 14:15:01.840810  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 14:15:01.840875  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.841970  846424 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 14:15:01.843548  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1026 14:15:01.842268  846424 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1026 14:15:01.843080  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1026 14:15:01.843369  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.845123  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.846927  846424 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 14:15:01.846947  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1026 14:15:01.847004  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.856949  846424 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1026 14:15:01.856978  846424 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1026 14:15:01.857056  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.860776  846424 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1026 14:15:01.867308  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1026 14:15:01.867357  846424 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1026 14:15:01.868311  846424 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 14:15:01.868329  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1026 14:15:01.868399  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.868677  846424 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1026 14:15:01.871040  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1026 14:15:01.871111  846424 out.go:179]   - Using image docker.io/registry:3.0.0
	I1026 14:15:01.872855  846424 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1026 14:15:01.872878  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1026 14:15:01.872949  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.873125  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1026 14:15:01.873516  846424 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1026 14:15:01.873535  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1026 14:15:01.873835  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.877387  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1026 14:15:01.879560  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1026 14:15:01.882349  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1026 14:15:01.883556  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1026 14:15:01.892467  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1026 14:15:01.893569  846424 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1026 14:15:01.893595  846424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1026 14:15:01.893667  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.905909  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
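The Port:33536 used by this and every following ssh client comes from the docker inspect template run repeatedly above; either of these reproduces it by hand:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-459729
	docker port addons-459729 22   # equivalent, prints the host address:port mapping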
	I1026 14:15:01.906029  846424 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1026 14:15:01.908203  846424 out.go:179]   - Using image docker.io/busybox:stable
	I1026 14:15:01.912234  846424 addons.go:238] Setting addon default-storageclass=true in "addons-459729"
	I1026 14:15:01.913190  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.913688  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.914316  846424 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 14:15:01.914397  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1026 14:15:01.914467  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.925284  846424 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1026 14:15:01.929356  846424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 14:15:01.930036  846424 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1026 14:15:01.930058  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1026 14:15:01.930129  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.936261  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.937883  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.938657  846424 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 14:15:01.940626  846424 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1026 14:15:01.941914  846424 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1026 14:15:01.942000  846424 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1026 14:15:01.942101  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.941961  846424 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1026 14:15:01.945791  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.945864  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.948625  846424 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 14:15:01.949928  846424 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 14:15:01.949982  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1026 14:15:01.950059  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.955204  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.970925  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.975351  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.976053  846424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 14:15:01.978630  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.991435  846424 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 14:15:01.991462  846424 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 14:15:01.991528  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.991780  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:02.016851  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	W1026 14:15:02.019263  846424 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1026 14:15:02.019305  846424 retry.go:31] will retry after 217.923962ms: ssh: handshake failed: EOF
	I1026 14:15:02.023195  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:02.032276  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:02.035819  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:02.039781  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:02.125207  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 14:15:02.136547  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 14:15:02.141012  846424 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 14:15:02.141040  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1026 14:15:02.149864  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 14:15:02.150107  846424 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1026 14:15:02.150133  846424 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1026 14:15:02.153611  846424 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1026 14:15:02.153638  846424 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1026 14:15:02.155650  846424 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1026 14:15:02.155673  846424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1026 14:15:02.157330  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1026 14:15:02.160138  846424 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:02.160154  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1026 14:15:02.168525  846424 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 14:15:02.168554  846424 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 14:15:02.188885  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 14:15:02.190931  846424 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1026 14:15:02.190953  846424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1026 14:15:02.191058  846424 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1026 14:15:02.191119  846424 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1026 14:15:02.195824  846424 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1026 14:15:02.195847  846424 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1026 14:15:02.196657  846424 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1026 14:15:02.196677  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1026 14:15:02.197637  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1026 14:15:02.200528  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 14:15:02.207552  846424 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 14:15:02.207579  846424 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 14:15:02.232493  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 14:15:02.235058  846424 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1026 14:15:02.235104  846424 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1026 14:15:02.247417  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 14:15:02.247703  846424 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1026 14:15:02.247731  846424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1026 14:15:02.254701  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:02.261459  846424 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1026 14:15:02.261489  846424 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1026 14:15:02.297299  846424 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1026 14:15:02.297343  846424 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1026 14:15:02.298507  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1026 14:15:02.314881  846424 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1026 14:15:02.314916  846424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1026 14:15:02.328700  846424 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1026 14:15:02.328736  846424 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1026 14:15:02.358580  846424 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
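The sed pipeline that just completed rewrites the coredns ConfigMap so the Corefile gains a "hosts { 192.168.49.1 host.minikube.internal; fallthrough }" block ahead of the forward directive, plus a "log" directive before "errors". The result can be inspected with:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'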
	I1026 14:15:02.359543  846424 node_ready.go:35] waiting up to 6m0s for node "addons-459729" to be "Ready" ...
	I1026 14:15:02.371344  846424 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1026 14:15:02.371372  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1026 14:15:02.404369  846424 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1026 14:15:02.404399  846424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1026 14:15:02.424439  846424 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 14:15:02.424528  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1026 14:15:02.442236  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1026 14:15:02.460571  846424 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1026 14:15:02.460657  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1026 14:15:02.502911  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 14:15:02.534388  846424 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1026 14:15:02.534419  846424 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1026 14:15:02.545901  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 14:15:02.614728  846424 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1026 14:15:02.614838  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1026 14:15:02.667295  846424 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1026 14:15:02.667523  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1026 14:15:02.707698  846424 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 14:15:02.707786  846424 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1026 14:15:02.747588  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 14:15:02.873331  846424 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-459729" context rescaled to 1 replicas
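The rescale logged by kapi.go above has this direct kubectl equivalent (same deployment and replica count):

	kubectl -n kube-system scale deployment coredns --replicas=1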
	I1026 14:15:03.502753  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.302179853s)
	I1026 14:15:03.502793  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.270259468s)
	I1026 14:15:03.502801  846424 addons.go:479] Verifying addon ingress=true in "addons-459729"
	I1026 14:15:03.503063  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.255611006s)
	I1026 14:15:03.503098  846424 addons.go:479] Verifying addon metrics-server=true in "addons-459729"
	I1026 14:15:03.503181  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.248430064s)
	W1026 14:15:03.503268  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:03.503289  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.204745558s)
	I1026 14:15:03.503322  846424 addons.go:479] Verifying addon registry=true in "addons-459729"
	I1026 14:15:03.503380  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.061046188s)
	I1026 14:15:03.503295  846424 retry.go:31] will retry after 148.010934ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
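The failure above is explainable from an earlier line: ig-crd.yaml was copied over at only 14 bytes (see the scp line at 14:15:01.840), which is too small to hold a CRD with apiVersion and kind set. kubectl's client-side schema validation rejects the file before anything reaches the server, which is why the --force retry below hits the identical error. A quick sketch for confirming this on the node:

	wc -c /etc/kubernetes/addons/ig-crd.yaml      # should report 14, per the scp log line
	head -c 64 /etc/kubernetes/addons/ig-crd.yaml # shows whatever the file actually contains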
	I1026 14:15:03.504599  846424 out.go:179] * Verifying registry addon...
	I1026 14:15:03.504631  846424 out.go:179] * Verifying ingress addon...
	I1026 14:15:03.507305  846424 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-459729 service yakd-dashboard -n yakd-dashboard
	
	I1026 14:15:03.508086  846424 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1026 14:15:03.508142  846424 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1026 14:15:03.511447  846424 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 14:15:03.511469  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:03.511568  846424 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1026 14:15:03.511589  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
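The two kapi waits above poll pods by label selector; the same views are available by hand, using the selectors from the log:

	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
	kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx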
	I1026 14:15:03.651987  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:03.931773  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.428808877s)
	W1026 14:15:03.931834  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1026 14:15:03.931861  846424 retry.go:31] will retry after 202.223495ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
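This retry is working around an ordering race: the VolumeSnapshotClass object and the CRD that defines it are applied in a single batch, and the CRD is not yet established when the class is submitted. A minimal sketch of the CRD-first ordering that avoids the race (file and CRD names taken from the log):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s \
	        crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml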
	I1026 14:15:03.931929  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.386004332s)
	I1026 14:15:03.932280  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.184640366s)
	I1026 14:15:03.932321  846424 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-459729"
	I1026 14:15:03.934515  846424 out.go:179] * Verifying csi-hostpath-driver addon...
	I1026 14:15:03.936685  846424 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1026 14:15:03.939543  846424 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 14:15:03.939568  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:04.011803  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:04.012023  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:04.135249  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1026 14:15:04.302639  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:04.302684  846424 retry.go:31] will retry after 256.294826ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
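The failure itself is informative: every object from ig-deployment.yaml applies cleanly ("unchanged"/"configured"), while ig-crd.yaml is rejected with "apiVersion not set, kind not set". kubectl validates each document of a multi-document manifest separately, and that message typically means one document resolves to empty or to a mapping lacking those keys (for example a stray "---" separator or a truncated file), which is why every retry below fails identically. A hedged Go sketch of that per-document check follows; the path is from the log, but the logic is illustrative, not kubectl's implementation.

    // Sketch: flag documents in a multi-document manifest missing apiVersion/kind.
    package main

    import (
        "fmt"
        "os"
        "strings"

        "gopkg.in/yaml.v3"
    )

    func main() {
        data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
        if err != nil {
            panic(err)
        }
        for i, doc := range strings.Split(string(data), "\n---") {
            var obj map[string]interface{}
            if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
                fmt.Printf("doc %d: unparseable: %v\n", i, err)
                continue
            }
            // An empty document (nil map) or one without these keys is what
            // produces "apiVersion not set, kind not set" under validation.
            if obj == nil || obj["apiVersion"] == nil || obj["kind"] == nil {
                fmt.Printf("doc %d: missing apiVersion/kind\n", i)
            }
        }
    }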
	W1026 14:15:04.362538  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:04.440917  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:04.541665  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:04.541710  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:04.559817  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:04.939696  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:05.011299  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:05.011458  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:05.440447  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:05.541188  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:05.541273  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:05.940840  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:06.011969  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:06.012243  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 14:15:06.362969  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:06.440395  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:06.540977  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:06.541042  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:06.641882  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.506580998s)
	I1026 14:15:06.641952  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.082094284s)
	W1026 14:15:06.641987  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:06.642010  846424 retry.go:31] will retry after 346.725146ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:06.940606  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:06.989704  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:07.011088  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:07.011280  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:07.440961  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:07.542090  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:07.542360  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 14:15:07.558417  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:07.558457  846424 retry.go:31] will retry after 465.781456ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:07.940090  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:08.011851  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:08.011921  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:08.025028  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1026 14:15:08.363131  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:08.439805  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:08.511865  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:08.512205  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 14:15:08.582561  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:08.582599  846424 retry.go:31] will retry after 1.449023391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:08.940711  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:09.011541  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:09.011689  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:09.440927  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:09.454842  846424 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1026 14:15:09.454915  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:09.474050  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
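The cli_runner and sshutil lines above show how minikube reaches the node under the Docker driver: the container's sshd listens on 22/tcp, Docker publishes it on an ephemeral host port (33536 here), and the SSH client dials 127.0.0.1 on that port. The inspect template below is copied from the log; wrapping it in Go is illustrative.

    // Sketch: recover the host port Docker mapped to the node's 22/tcp.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func hostSSHPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            "-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := hostSSHPort("addons-459729")
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh to 127.0.0.1:" + port) // the log shows 33536
    }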
	I1026 14:15:09.542099  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:09.542269  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:09.586209  846424 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1026 14:15:09.599936  846424 addons.go:238] Setting addon gcp-auth=true in "addons-459729"
	I1026 14:15:09.600004  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:09.600518  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:09.618865  846424 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1026 14:15:09.618925  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:09.637719  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:09.738033  846424 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 14:15:09.739603  846424 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1026 14:15:09.741100  846424 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1026 14:15:09.741126  846424 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1026 14:15:09.755471  846424 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1026 14:15:09.755502  846424 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1026 14:15:09.769570  846424 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 14:15:09.769600  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1026 14:15:09.783135  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
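Note that the gcp-auth manifests never exist as files on the host: the "scp memory -->" lines stream in-memory buffers straight to paths under /etc/kubernetes/addons/ before the kubectl apply above runs on the node. A sketch of that pattern with golang.org/x/crypto/ssh follows; host, port, user, and key path are taken from the log, but piping into sudo tee is an assumed transfer mechanism, not necessarily ssh_runner's, and the manifest content is a placeholder.

    // Sketch: stream an in-memory buffer to a file on the node over SSH.
    package main

    import (
        "bytes"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func copyToNode(client *ssh.Client, data []byte, dest string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run("sudo tee " + dest + " > /dev/null")
    }

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33536", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()
        manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: gcp-auth\n")
        if err := copyToNode(client, manifest, "/etc/kubernetes/addons/gcp-auth-ns.yaml"); err != nil {
            panic(err)
        }
    }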
	I1026 14:15:09.940438  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:10.011447  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:10.011724  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:10.032590  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:10.107618  846424 addons.go:479] Verifying addon gcp-auth=true in "addons-459729"
	I1026 14:15:10.109476  846424 out.go:179] * Verifying gcp-auth addon...
	I1026 14:15:10.112303  846424 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1026 14:15:10.115588  846424 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1026 14:15:10.115614  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:10.441825  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:10.511906  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:10.511972  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 14:15:10.611392  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:10.611426  846424 retry.go:31] will retry after 1.80430156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:10.614915  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:10.862859  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:10.939690  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:11.011633  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:11.011841  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:11.116133  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:11.440853  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:11.511600  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:11.511833  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:11.615829  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:11.940803  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:12.011795  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:12.012045  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:12.115725  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:12.416588  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:12.440181  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:12.511462  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:12.511639  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:12.615801  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:12.940325  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:15:12.964755  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:12.964784  846424 retry.go:31] will retry after 1.780244556s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:13.011987  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:13.012113  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:13.116258  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:13.363321  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:13.440372  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:13.511266  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:13.511405  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:13.615430  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:13.940076  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:14.012062  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:14.012116  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:14.116253  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:14.440242  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:14.512057  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:14.512338  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:14.615992  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:14.746241  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:14.940674  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:15.011505  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:15.011640  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:15.116328  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:15.316951  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:15.316989  846424 retry.go:31] will retry after 5.440492782s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:15.440200  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:15.511134  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:15.511275  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:15.616267  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:15.862887  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:15.939913  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:16.011983  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:16.012134  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:16.116436  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:16.440198  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:16.512498  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:16.512684  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:16.615786  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:16.940627  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:17.011646  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:17.011893  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:17.116034  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:17.440400  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:17.511242  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:17.511408  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:17.616515  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:17.940364  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:18.011130  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:18.011253  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:18.116015  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:18.363065  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:18.440278  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:18.512057  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:18.512257  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:18.616302  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:18.940378  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:19.011296  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:19.011355  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:19.116473  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:19.440955  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:19.511663  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:19.511896  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:19.616320  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:19.940560  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:20.011520  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:20.011797  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:20.115557  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:20.440901  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:20.511988  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:20.512031  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:20.615783  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:20.758096  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1026 14:15:20.862647  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:20.940915  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:21.012207  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:21.012289  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:21.117067  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:21.313675  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:21.313707  846424 retry.go:31] will retry after 8.91122247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:21.440656  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:21.511553  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:21.511689  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:21.615625  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:21.940584  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:22.011440  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:22.011654  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:22.115655  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:22.440488  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:22.511406  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:22.511550  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:22.615671  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:22.940358  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:23.011074  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:23.011174  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:23.116377  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:23.363318  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:23.440377  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:23.511345  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:23.511560  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:23.615384  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:23.940379  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:24.011307  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:24.011561  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:24.116091  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:24.440587  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:24.511418  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:24.511646  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:24.615811  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:24.939855  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:25.011683  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:25.011782  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:25.116357  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:25.440984  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:25.511873  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:25.511903  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:25.615664  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:25.862365  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:25.940295  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:26.011232  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:26.011407  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:26.115402  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:26.440446  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:26.511527  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:26.511752  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:26.615540  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:26.940929  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:27.042156  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:27.042322  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:27.142622  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:27.440313  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:27.511616  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:27.511736  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:27.615910  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:27.863296  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:27.940352  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:28.011545  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:28.011563  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:28.115289  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:28.440439  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:28.511542  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:28.511612  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:28.615532  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:28.940483  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:29.011417  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:29.011572  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:29.115862  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:29.440528  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:29.511762  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:29.511961  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:29.615472  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:29.863626  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:29.940511  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:30.011347  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:30.011526  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:30.115553  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:30.225751  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:30.440732  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:30.511761  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:30.511809  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:30.615389  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:30.801581  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:30.801612  846424 retry.go:31] will retry after 13.384924225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
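Across these attempts the retry.go:31 delays grow roughly geometrically with jitter: 256ms, 346ms, 465ms, 1.45s, 1.80s, 1.78s, 5.44s, 8.91s, and now 13.38s. A sketch of that retry-with-jittered-backoff shape follows; the multiplier, jitter range, and attempt cap are assumptions for illustration, not minikube's exact tuning.

    // Sketch: retry an operation with a growing, jittered delay between attempts.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            // Randomize around the current delay, then grow it for the next round.
            jittered := time.Duration(float64(delay) * (0.5 + rand.Float64()))
            fmt.Printf("will retry after %v: %v\n", jittered, err)
            time.Sleep(jittered)
            delay *= 2
        }
        return err
    }

    func main() {
        calls := 0
        err := retryWithBackoff(5, 250*time.Millisecond, func() error {
            calls++
            if calls < 4 {
                return errors.New("Process exited with status 1")
            }
            return nil
        })
        fmt.Println("final:", err)
    }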
	I1026 14:15:30.940459  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:31.011507  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:31.011625  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:31.115351  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:31.440233  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:31.510980  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:31.511100  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:31.616243  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:31.940463  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:32.011513  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:32.011678  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:32.115622  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:32.362628  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:32.440664  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:32.511680  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:32.511737  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:32.615569  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:32.940541  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:33.011546  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:33.011664  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:33.115806  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:33.440064  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:33.512047  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:33.512126  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:33.615997  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:33.939890  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:34.012203  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:34.012266  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:34.116285  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:34.362894  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:34.439794  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:34.511885  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:34.511888  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:34.615635  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:34.940637  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:35.011802  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:35.012036  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:35.116073  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:35.441237  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:35.511039  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:35.511313  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:35.616525  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:35.940536  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:36.011591  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:36.011913  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:36.115551  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:36.440489  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:36.511389  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:36.511596  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:36.615558  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:36.862289  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:36.940212  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:37.011145  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:37.011314  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:37.116406  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:37.440773  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:37.511657  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:37.511746  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:37.615698  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:37.940654  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:38.011611  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:38.011630  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:38.115442  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:38.440259  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:38.511046  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:38.511100  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:38.616066  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:38.863142  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:38.940023  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:39.012065  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:39.012130  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:39.116011  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:39.439627  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:39.511481  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:39.511553  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:39.615371  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:39.940298  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:40.011243  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:40.011419  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:40.115389  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:40.440352  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:40.511019  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:40.511307  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:40.616092  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:40.939667  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:41.011746  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:41.011778  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:41.115465  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:41.363466  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:41.440572  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:41.511456  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:41.511511  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:41.615524  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:41.940641  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:42.011604  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:42.011718  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:42.115753  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:42.440553  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:42.511790  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:42.512024  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:42.615996  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:42.940386  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:43.011106  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:43.011220  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:43.116069  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:43.363260  846424 node_ready.go:49] node "addons-459729" is "Ready"
	I1026 14:15:43.363297  846424 node_ready.go:38] duration metric: took 41.003701767s for node "addons-459729" to be "Ready" ...
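The Ready wait above polls the node's Ready condition until it flips to True; the ~41s is the time from cluster start to the first True. A minimal way to run the same check by hand, assuming kubectl is pointed at this cluster's kubeconfig:

    kubectl get node addons-459729 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints True once kubelet reports the node Ready; False (as in the
    # "will retry" lines above) until then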
	I1026 14:15:43.363317  846424 api_server.go:52] waiting for apiserver process to appear ...
	I1026 14:15:43.363400  846424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 14:15:43.381711  846424 api_server.go:72] duration metric: took 41.62593283s to wait for apiserver process to appear ...
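The process wait shells into the node and greps for the apiserver: with pgrep, -f matches against the full command line, -x requires the whole line to match the pattern, and -n returns only the newest matching PID. A rough hand-run equivalent, reusing the profile name from this run:

    minikube -p addons-459729 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # exit status 0 plus a PID on stdout once kube-apiserver is running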
	I1026 14:15:43.381745  846424 api_server.go:88] waiting for apiserver healthz status ...
	I1026 14:15:43.381771  846424 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 14:15:43.386270  846424 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
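The healthz wait is a plain HTTPS GET against the apiserver; a healthy apiserver answers 200 with the literal body "ok", which is the stray line above. To reproduce it against the same endpoint (-k skips certificate verification, since the cluster CA is not in the host trust store):

    curl -k https://192.168.49.2:8443/healthz
    # -> ok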
	I1026 14:15:43.387310  846424 api_server.go:141] control plane version: v1.34.1
	I1026 14:15:43.387346  846424 api_server.go:131] duration metric: took 5.591629ms to wait for apiserver health ...
	I1026 14:15:43.387357  846424 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 14:15:43.390642  846424 system_pods.go:59] 20 kube-system pods found
	I1026 14:15:43.390691  846424 system_pods.go:61] "amd-gpu-device-plugin-cpl45" [3361dd34-f7d4-4824-b347-6f718134c1bc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 14:15:43.390702  846424 system_pods.go:61] "coredns-66bc5c9577-58kmh" [5f6dbec0-d423-40de-b8d5-a900bc1f5851] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 14:15:43.390711  846424 system_pods.go:61] "csi-hostpath-attacher-0" [eab2876d-7674-4188-8967-19945573776e] Pending
	I1026 14:15:43.390716  846424 system_pods.go:61] "csi-hostpath-resizer-0" [abb5e910-471c-42c2-ae26-54af2fb0e618] Pending
	I1026 14:15:43.390720  846424 system_pods.go:61] "csi-hostpathplugin-86x7s" [a3788919-a77b-413f-a55b-c6a616ccb202] Pending
	I1026 14:15:43.390723  846424 system_pods.go:61] "etcd-addons-459729" [ffa30eb3-3fdb-4184-bb14-f06554bd4979] Running
	I1026 14:15:43.390726  846424 system_pods.go:61] "kindnet-qskcd" [cf0b58e9-eade-47c7-840d-1de1857e53f1] Running
	I1026 14:15:43.390729  846424 system_pods.go:61] "kube-apiserver-addons-459729" [9ab803e5-033d-4f89-8aae-9f6ccc56ea17] Running
	I1026 14:15:43.390732  846424 system_pods.go:61] "kube-controller-manager-addons-459729" [579e4b55-312d-49a7-bd86-7d65e8efde23] Running
	I1026 14:15:43.390742  846424 system_pods.go:61] "kube-ingress-dns-minikube" [238ae152-8a88-4041-abdd-bf5aacdc6f1a] Pending
	I1026 14:15:43.390745  846424 system_pods.go:61] "kube-proxy-2f7sr" [8ea92d4a-c60f-40db-ab7a-8772c201060f] Running
	I1026 14:15:43.390751  846424 system_pods.go:61] "kube-scheduler-addons-459729" [f7a61f82-6ea9-4993-b093-a03245db6ed6] Running
	I1026 14:15:43.390756  846424 system_pods.go:61] "metrics-server-85b7d694d7-g2nwm" [ea0a025f-f342-49d8-89cc-a9bd82a08b87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:15:43.390762  846424 system_pods.go:61] "nvidia-device-plugin-daemonset-24shm" [1bb55f2d-872c-4696-aac2-64ab714c33e4] Pending
	I1026 14:15:43.390784  846424 system_pods.go:61] "registry-6b586f9694-ds6k9" [14709e0b-ba9d-4eb0-b79e-a8106cba342e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:15:43.390790  846424 system_pods.go:61] "registry-creds-764b6fb674-dk4lc" [11a2adc0-f603-426f-af30-919a48eee4bc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:15:43.390795  846424 system_pods.go:61] "registry-proxy-cs2k2" [cecd1865-e35b-4581-8aaf-358948bc244c] Pending
	I1026 14:15:43.390799  846424 system_pods.go:61] "snapshot-controller-7d9fbc56b8-d9lzl" [673a7351-7a17-4a94-b2df-c246a1fd5519] Pending
	I1026 14:15:43.390802  846424 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wrh9q" [66fdcdd8-8b70-496f-8b43-bf5dc2c1cb1a] Pending
	I1026 14:15:43.390807  846424 system_pods.go:61] "storage-provisioner" [01091c73-f5b0-4c51-ad56-fdc2723f09b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 14:15:43.390818  846424 system_pods.go:74] duration metric: took 3.45377ms to wait for pod list to return data ...
	I1026 14:15:43.390829  846424 default_sa.go:34] waiting for default service account to be created ...
	I1026 14:15:43.394537  846424 default_sa.go:45] found service account: "default"
	I1026 14:15:43.394566  846424 default_sa.go:55] duration metric: took 3.728908ms for default service account to be created ...
	I1026 14:15:43.394579  846424 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 14:15:43.398295  846424 system_pods.go:86] 20 kube-system pods found
	I1026 14:15:43.398331  846424 system_pods.go:89] "amd-gpu-device-plugin-cpl45" [3361dd34-f7d4-4824-b347-6f718134c1bc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 14:15:43.398340  846424 system_pods.go:89] "coredns-66bc5c9577-58kmh" [5f6dbec0-d423-40de-b8d5-a900bc1f5851] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 14:15:43.398348  846424 system_pods.go:89] "csi-hostpath-attacher-0" [eab2876d-7674-4188-8967-19945573776e] Pending
	I1026 14:15:43.398354  846424 system_pods.go:89] "csi-hostpath-resizer-0" [abb5e910-471c-42c2-ae26-54af2fb0e618] Pending
	I1026 14:15:43.398359  846424 system_pods.go:89] "csi-hostpathplugin-86x7s" [a3788919-a77b-413f-a55b-c6a616ccb202] Pending
	I1026 14:15:43.398364  846424 system_pods.go:89] "etcd-addons-459729" [ffa30eb3-3fdb-4184-bb14-f06554bd4979] Running
	I1026 14:15:43.398371  846424 system_pods.go:89] "kindnet-qskcd" [cf0b58e9-eade-47c7-840d-1de1857e53f1] Running
	I1026 14:15:43.398377  846424 system_pods.go:89] "kube-apiserver-addons-459729" [9ab803e5-033d-4f89-8aae-9f6ccc56ea17] Running
	I1026 14:15:43.398385  846424 system_pods.go:89] "kube-controller-manager-addons-459729" [579e4b55-312d-49a7-bd86-7d65e8efde23] Running
	I1026 14:15:43.398396  846424 system_pods.go:89] "kube-ingress-dns-minikube" [238ae152-8a88-4041-abdd-bf5aacdc6f1a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 14:15:43.398405  846424 system_pods.go:89] "kube-proxy-2f7sr" [8ea92d4a-c60f-40db-ab7a-8772c201060f] Running
	I1026 14:15:43.398412  846424 system_pods.go:89] "kube-scheduler-addons-459729" [f7a61f82-6ea9-4993-b093-a03245db6ed6] Running
	I1026 14:15:43.398423  846424 system_pods.go:89] "metrics-server-85b7d694d7-g2nwm" [ea0a025f-f342-49d8-89cc-a9bd82a08b87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:15:43.398432  846424 system_pods.go:89] "nvidia-device-plugin-daemonset-24shm" [1bb55f2d-872c-4696-aac2-64ab714c33e4] Pending
	I1026 14:15:43.398441  846424 system_pods.go:89] "registry-6b586f9694-ds6k9" [14709e0b-ba9d-4eb0-b79e-a8106cba342e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:15:43.398452  846424 system_pods.go:89] "registry-creds-764b6fb674-dk4lc" [11a2adc0-f603-426f-af30-919a48eee4bc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:15:43.398460  846424 system_pods.go:89] "registry-proxy-cs2k2" [cecd1865-e35b-4581-8aaf-358948bc244c] Pending
	I1026 14:15:43.398466  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d9lzl" [673a7351-7a17-4a94-b2df-c246a1fd5519] Pending
	I1026 14:15:43.398474  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wrh9q" [66fdcdd8-8b70-496f-8b43-bf5dc2c1cb1a] Pending
	I1026 14:15:43.398481  846424 system_pods.go:89] "storage-provisioner" [01091c73-f5b0-4c51-ad56-fdc2723f09b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 14:15:43.398503  846424 retry.go:31] will retry after 285.578303ms: missing components: kube-dns
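"missing components: kube-dns" means no DNS pod has reached Running yet; minikube keys this on the k8s-app=kube-dns label that CoreDNS pods carry. To watch the same pods directly while the retries run:

    kubectl -n kube-system get pods -l k8s-app=kube-dns -w
    # the retries stop once a pod reports Running, as coredns-66bc5c9577-58kmh
    # does in the 14:15:44 poll below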
	I1026 14:15:43.439988  846424 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 14:15:43.440011  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:43.511891  846424 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 14:15:43.511923  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:43.512089  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:43.617305  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:43.720800  846424 system_pods.go:86] 20 kube-system pods found
	I1026 14:15:43.720851  846424 system_pods.go:89] "amd-gpu-device-plugin-cpl45" [3361dd34-f7d4-4824-b347-6f718134c1bc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 14:15:43.720871  846424 system_pods.go:89] "coredns-66bc5c9577-58kmh" [5f6dbec0-d423-40de-b8d5-a900bc1f5851] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 14:15:43.720883  846424 system_pods.go:89] "csi-hostpath-attacher-0" [eab2876d-7674-4188-8967-19945573776e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 14:15:43.720891  846424 system_pods.go:89] "csi-hostpath-resizer-0" [abb5e910-471c-42c2-ae26-54af2fb0e618] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 14:15:43.720909  846424 system_pods.go:89] "csi-hostpathplugin-86x7s" [a3788919-a77b-413f-a55b-c6a616ccb202] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 14:15:43.720924  846424 system_pods.go:89] "etcd-addons-459729" [ffa30eb3-3fdb-4184-bb14-f06554bd4979] Running
	I1026 14:15:43.720930  846424 system_pods.go:89] "kindnet-qskcd" [cf0b58e9-eade-47c7-840d-1de1857e53f1] Running
	I1026 14:15:43.720951  846424 system_pods.go:89] "kube-apiserver-addons-459729" [9ab803e5-033d-4f89-8aae-9f6ccc56ea17] Running
	I1026 14:15:43.720962  846424 system_pods.go:89] "kube-controller-manager-addons-459729" [579e4b55-312d-49a7-bd86-7d65e8efde23] Running
	I1026 14:15:43.720971  846424 system_pods.go:89] "kube-ingress-dns-minikube" [238ae152-8a88-4041-abdd-bf5aacdc6f1a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 14:15:43.720984  846424 system_pods.go:89] "kube-proxy-2f7sr" [8ea92d4a-c60f-40db-ab7a-8772c201060f] Running
	I1026 14:15:43.720991  846424 system_pods.go:89] "kube-scheduler-addons-459729" [f7a61f82-6ea9-4993-b093-a03245db6ed6] Running
	I1026 14:15:43.721001  846424 system_pods.go:89] "metrics-server-85b7d694d7-g2nwm" [ea0a025f-f342-49d8-89cc-a9bd82a08b87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:15:43.721016  846424 system_pods.go:89] "nvidia-device-plugin-daemonset-24shm" [1bb55f2d-872c-4696-aac2-64ab714c33e4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 14:15:43.721023  846424 system_pods.go:89] "registry-6b586f9694-ds6k9" [14709e0b-ba9d-4eb0-b79e-a8106cba342e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:15:43.721031  846424 system_pods.go:89] "registry-creds-764b6fb674-dk4lc" [11a2adc0-f603-426f-af30-919a48eee4bc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:15:43.721042  846424 system_pods.go:89] "registry-proxy-cs2k2" [cecd1865-e35b-4581-8aaf-358948bc244c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 14:15:43.721056  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d9lzl" [673a7351-7a17-4a94-b2df-c246a1fd5519] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:15:43.721066  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wrh9q" [66fdcdd8-8b70-496f-8b43-bf5dc2c1cb1a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:15:43.721075  846424 system_pods.go:89] "storage-provisioner" [01091c73-f5b0-4c51-ad56-fdc2723f09b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 14:15:43.721098  846424 retry.go:31] will retry after 329.971946ms: missing components: kube-dns
	I1026 14:15:43.942121  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:44.012262  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:44.012376  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:44.056065  846424 system_pods.go:86] 20 kube-system pods found
	I1026 14:15:44.056108  846424 system_pods.go:89] "amd-gpu-device-plugin-cpl45" [3361dd34-f7d4-4824-b347-6f718134c1bc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 14:15:44.056119  846424 system_pods.go:89] "coredns-66bc5c9577-58kmh" [5f6dbec0-d423-40de-b8d5-a900bc1f5851] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 14:15:44.056129  846424 system_pods.go:89] "csi-hostpath-attacher-0" [eab2876d-7674-4188-8967-19945573776e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 14:15:44.056139  846424 system_pods.go:89] "csi-hostpath-resizer-0" [abb5e910-471c-42c2-ae26-54af2fb0e618] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 14:15:44.056147  846424 system_pods.go:89] "csi-hostpathplugin-86x7s" [a3788919-a77b-413f-a55b-c6a616ccb202] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 14:15:44.056153  846424 system_pods.go:89] "etcd-addons-459729" [ffa30eb3-3fdb-4184-bb14-f06554bd4979] Running
	I1026 14:15:44.056171  846424 system_pods.go:89] "kindnet-qskcd" [cf0b58e9-eade-47c7-840d-1de1857e53f1] Running
	I1026 14:15:44.056179  846424 system_pods.go:89] "kube-apiserver-addons-459729" [9ab803e5-033d-4f89-8aae-9f6ccc56ea17] Running
	I1026 14:15:44.056184  846424 system_pods.go:89] "kube-controller-manager-addons-459729" [579e4b55-312d-49a7-bd86-7d65e8efde23] Running
	I1026 14:15:44.056193  846424 system_pods.go:89] "kube-ingress-dns-minikube" [238ae152-8a88-4041-abdd-bf5aacdc6f1a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 14:15:44.056202  846424 system_pods.go:89] "kube-proxy-2f7sr" [8ea92d4a-c60f-40db-ab7a-8772c201060f] Running
	I1026 14:15:44.056209  846424 system_pods.go:89] "kube-scheduler-addons-459729" [f7a61f82-6ea9-4993-b093-a03245db6ed6] Running
	I1026 14:15:44.056217  846424 system_pods.go:89] "metrics-server-85b7d694d7-g2nwm" [ea0a025f-f342-49d8-89cc-a9bd82a08b87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:15:44.056229  846424 system_pods.go:89] "nvidia-device-plugin-daemonset-24shm" [1bb55f2d-872c-4696-aac2-64ab714c33e4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 14:15:44.056239  846424 system_pods.go:89] "registry-6b586f9694-ds6k9" [14709e0b-ba9d-4eb0-b79e-a8106cba342e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:15:44.056251  846424 system_pods.go:89] "registry-creds-764b6fb674-dk4lc" [11a2adc0-f603-426f-af30-919a48eee4bc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:15:44.056261  846424 system_pods.go:89] "registry-proxy-cs2k2" [cecd1865-e35b-4581-8aaf-358948bc244c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 14:15:44.056270  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d9lzl" [673a7351-7a17-4a94-b2df-c246a1fd5519] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:15:44.056276  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wrh9q" [66fdcdd8-8b70-496f-8b43-bf5dc2c1cb1a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:15:44.056281  846424 system_pods.go:89] "storage-provisioner" [01091c73-f5b0-4c51-ad56-fdc2723f09b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 14:15:44.056300  846424 retry.go:31] will retry after 468.560484ms: missing components: kube-dns
	I1026 14:15:44.117136  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:44.187375  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:44.441427  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:44.511423  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:44.511459  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:44.530251  846424 system_pods.go:86] 20 kube-system pods found
	I1026 14:15:44.530287  846424 system_pods.go:89] "amd-gpu-device-plugin-cpl45" [3361dd34-f7d4-4824-b347-6f718134c1bc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 14:15:44.530295  846424 system_pods.go:89] "coredns-66bc5c9577-58kmh" [5f6dbec0-d423-40de-b8d5-a900bc1f5851] Running
	I1026 14:15:44.530306  846424 system_pods.go:89] "csi-hostpath-attacher-0" [eab2876d-7674-4188-8967-19945573776e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 14:15:44.530314  846424 system_pods.go:89] "csi-hostpath-resizer-0" [abb5e910-471c-42c2-ae26-54af2fb0e618] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 14:15:44.530323  846424 system_pods.go:89] "csi-hostpathplugin-86x7s" [a3788919-a77b-413f-a55b-c6a616ccb202] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 14:15:44.530329  846424 system_pods.go:89] "etcd-addons-459729" [ffa30eb3-3fdb-4184-bb14-f06554bd4979] Running
	I1026 14:15:44.530334  846424 system_pods.go:89] "kindnet-qskcd" [cf0b58e9-eade-47c7-840d-1de1857e53f1] Running
	I1026 14:15:44.530344  846424 system_pods.go:89] "kube-apiserver-addons-459729" [9ab803e5-033d-4f89-8aae-9f6ccc56ea17] Running
	I1026 14:15:44.530350  846424 system_pods.go:89] "kube-controller-manager-addons-459729" [579e4b55-312d-49a7-bd86-7d65e8efde23] Running
	I1026 14:15:44.530361  846424 system_pods.go:89] "kube-ingress-dns-minikube" [238ae152-8a88-4041-abdd-bf5aacdc6f1a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 14:15:44.530366  846424 system_pods.go:89] "kube-proxy-2f7sr" [8ea92d4a-c60f-40db-ab7a-8772c201060f] Running
	I1026 14:15:44.530376  846424 system_pods.go:89] "kube-scheduler-addons-459729" [f7a61f82-6ea9-4993-b093-a03245db6ed6] Running
	I1026 14:15:44.530383  846424 system_pods.go:89] "metrics-server-85b7d694d7-g2nwm" [ea0a025f-f342-49d8-89cc-a9bd82a08b87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:15:44.530396  846424 system_pods.go:89] "nvidia-device-plugin-daemonset-24shm" [1bb55f2d-872c-4696-aac2-64ab714c33e4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 14:15:44.530415  846424 system_pods.go:89] "registry-6b586f9694-ds6k9" [14709e0b-ba9d-4eb0-b79e-a8106cba342e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:15:44.530428  846424 system_pods.go:89] "registry-creds-764b6fb674-dk4lc" [11a2adc0-f603-426f-af30-919a48eee4bc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:15:44.530438  846424 system_pods.go:89] "registry-proxy-cs2k2" [cecd1865-e35b-4581-8aaf-358948bc244c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 14:15:44.530446  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d9lzl" [673a7351-7a17-4a94-b2df-c246a1fd5519] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:15:44.530456  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wrh9q" [66fdcdd8-8b70-496f-8b43-bf5dc2c1cb1a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:15:44.530462  846424 system_pods.go:89] "storage-provisioner" [01091c73-f5b0-4c51-ad56-fdc2723f09b2] Running
	I1026 14:15:44.530472  846424 system_pods.go:126] duration metric: took 1.135885614s to wait for k8s-apps to be running ...
	I1026 14:15:44.530482  846424 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 14:15:44.530536  846424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
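The kubelet check relies purely on the exit status: systemctl is-active exits 0 only when the unit is active, and --quiet suppresses the usual active/inactive output. A hand-run sketch inside the node:

    sudo systemctl is-active --quiet kubelet && echo kubelet is active
    # exit 0 (and the echo) only while the kubelet unit is active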
	I1026 14:15:44.616031  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:44.908118  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:44.908188  846424 retry.go:31] will retry after 19.716620035s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
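This apply failure is what keeps the inspektor-gadget addon retrying: kubectl's client-side validation requires every YAML document to declare apiVersion and kind, so a document in ig-crd.yaml that is empty or missing its header trips exactly this message (the --validate=false flag the error suggests only disables the check; it does not repair the manifest). One way to inspect the file from the host, again assuming the profile name from this run:

    minikube -p addons-459729 ssh -- grep -n -e '^---' -e '^apiVersion:' -e '^kind:' \
        /etc/kubernetes/addons/ig-crd.yaml
    # each document between --- separators should open with its own
    # apiVersion: and kind: lines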
	I1026 14:15:44.908207  846424 system_svc.go:56] duration metric: took 377.714352ms WaitForService to wait for kubelet
	I1026 14:15:44.908230  846424 kubeadm.go:586] duration metric: took 43.152458642s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 14:15:44.908250  846424 node_conditions.go:102] verifying NodePressure condition ...
	I1026 14:15:44.911337  846424 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 14:15:44.911364  846424 node_conditions.go:123] node cpu capacity is 8
	I1026 14:15:44.911397  846424 node_conditions.go:105] duration metric: took 3.140307ms to run NodePressure ...
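The NodePressure step just reads capacity off the node object; 304681132Ki is roughly 290GiB of ephemeral storage alongside the 8 CPUs. The same figures are visible with:

    kubectl get node addons-459729 \
      -o jsonpath='{.status.capacity.cpu}{" cpus, "}{.status.capacity.ephemeral-storage}{" ephemeral storage\n"}'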
	I1026 14:15:44.911413  846424 start.go:241] waiting for startup goroutines ...
	I1026 14:15:44.940945  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:45.011805  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:45.011886  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:45.116285  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:45.440843  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:45.513224  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:45.513412  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:45.616570  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:45.942361  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:46.012675  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:46.013528  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:46.117474  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:46.441947  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:46.512341  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:46.512535  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:46.616794  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:46.940470  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:47.011869  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:47.011931  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:47.116563  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:47.440841  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:47.512115  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:47.512220  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:47.616287  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:47.941835  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:48.012250  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:48.012341  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:48.116362  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:48.441006  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:48.512345  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:48.512356  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:48.616602  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:48.940607  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:49.011849  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:49.012016  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:49.116379  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:49.440767  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:49.511952  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:49.511976  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:49.616460  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:49.941743  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:50.012219  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:50.012379  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:50.116868  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:50.441466  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:50.513107  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:50.515804  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:50.616823  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:50.941030  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:51.012636  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:51.012725  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:51.115494  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:51.441781  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:51.512071  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:51.512308  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:51.616593  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:51.940965  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:52.012409  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:52.012488  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:52.115563  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:52.443333  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:52.511231  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:52.511257  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:52.616100  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:52.940805  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:53.012226  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:53.012332  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:53.116967  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:53.440270  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:53.511122  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:53.511202  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:53.615637  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:53.940600  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:54.011614  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:54.011714  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:54.116924  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:54.441210  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:54.512865  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:54.513048  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:54.617005  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:54.940859  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:55.012113  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:55.012224  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:55.116663  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:55.441153  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:55.512196  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:55.512413  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:55.616687  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:55.940578  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:56.011673  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:56.011694  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:56.115735  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:56.440993  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:56.512523  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:56.512651  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:56.615678  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:56.940579  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:57.041197  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:57.041223  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:57.116077  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:57.441079  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:57.541533  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:57.541819  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:57.615870  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:57.940598  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:58.011563  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:58.011563  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:58.115144  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:58.441032  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:58.511836  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:58.511904  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:58.616428  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:58.941373  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:59.012293  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:59.012340  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:59.115780  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:59.440337  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:59.511639  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:59.511739  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:59.616062  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:59.940947  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:00.012642  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:00.012851  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:00.116169  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:00.441026  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:00.512261  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:00.512331  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:00.616120  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:00.941108  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:01.012439  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:01.012548  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:01.115880  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:01.440399  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:01.511194  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:01.511239  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:01.616484  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:01.940063  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:02.012035  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:02.012152  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:02.116524  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:02.440114  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:02.511694  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:02.511958  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:02.616287  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:02.940613  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:03.011613  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:03.011834  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:03.115994  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:03.440884  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:03.541763  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:03.541805  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:03.615582  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:03.939875  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:04.012997  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:04.012997  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:04.116050  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:04.440720  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:04.541348  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:04.541372  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:04.616022  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:04.625105  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:04.940631  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:05.011348  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:05.011465  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:05.116030  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:16:05.176887  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:05.176928  846424 retry.go:31] will retry after 26.54487401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
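
The apply fails kubectl's client-side validation because at least one YAML document in ig-crd.yaml reaches kubectl without the mandatory `apiVersion`/`kind` header (every other gadget resource in the same apply goes through unchanged, so only the CRD manifest is malformed or empty). A quick way to confirm, plus the blunt workaround the error text itself suggests; paths and the kubectl binary location are copied from the log above, so this is a diagnostic sketch, not something minikube runs:

	# Check whether the CRD manifest actually carries an apiVersion/kind header;
	# every YAML document kubectl validates must start with one, e.g.
	#   apiVersion: apiextensions.k8s.io/v1
	#   kind: CustomResourceDefinition
	head -n 5 /etc/kubernetes/addons/ig-crd.yaml

	# The error message's own escape hatch: skip client-side validation entirely.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml

As the log shows next, minikube instead retries the unmodified apply after 26.5s and eventually surfaces the same error as a warning.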
	I1026 14:16:05.441807  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:05.511995  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:05.512004  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:05.617230  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:05.944172  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:06.012901  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:06.014375  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:06.116181  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:06.473520  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:06.678507  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:06.678878  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:06.678919  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:06.943847  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:07.014324  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:07.015611  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:07.115649  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:07.440832  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:07.512429  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:07.512442  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:07.616798  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:07.941418  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:08.042703  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:08.042726  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:08.116002  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:08.440870  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:08.512242  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:08.512318  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:08.617191  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:08.940574  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:09.011290  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:09.011501  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:09.116370  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:09.441256  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:09.512541  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:09.512777  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:09.615743  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:09.940529  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:10.017652  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:10.017858  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:10.151413  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:10.551948  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:10.552021  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:10.552037  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:10.615849  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:10.940842  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:11.012102  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:11.012263  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:11.116194  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:11.440707  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:11.511926  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:11.511992  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:11.616604  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:11.959290  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:12.081235  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:12.081274  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:12.243116  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:12.442118  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:12.542025  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:12.542035  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:12.615889  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:12.940336  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:13.011966  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:13.012041  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:13.116222  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:13.441408  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:13.511717  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:13.511795  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:13.616459  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:13.941451  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:14.011632  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:14.011677  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:14.115643  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:14.440266  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:14.512541  846424 kapi.go:107] duration metric: took 1m11.004457602s to wait for kubernetes.io/minikube-addons=registry ...
	I1026 14:16:14.512727  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:14.616135  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:14.941020  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:15.011868  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:15.116053  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:15.441317  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:15.512641  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:15.616897  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:15.940327  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:16.012834  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:16.116231  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:16.554332  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:16.554383  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:16.702636  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:16.941451  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:17.042536  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:17.115437  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:17.440730  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:17.512822  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:17.615703  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:17.940235  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:18.011654  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:18.115582  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:18.442174  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:18.512192  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:18.615605  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:18.940832  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:19.012192  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:19.116052  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:19.445141  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:19.512920  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:19.616513  846424 kapi.go:107] duration metric: took 1m9.504207447s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1026 14:16:19.618361  846424 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-459729 cluster.
	I1026 14:16:19.619952  846424 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1026 14:16:19.621190  846424 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
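
Per the three messages above, the gcp-auth webhook mutates every newly created pod to mount the credentials unless the pod carries a label with the `gcp-auth-skip-secret` key. A minimal sketch of opting a pod out; the pod name, image, and label value are placeholders, only the label key comes from the message:

	kubectl apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds              # placeholder name
	  labels:
	    gcp-auth-skip-secret: "true"  # the key is what matters, per the message above
	spec:
	  containers:
	  - name: app
	    image: busybox                # placeholder image
	    command: ["sleep", "3600"]
	EOF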
	I1026 14:16:19.941420  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:20.012984  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:20.467066  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:20.512438  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:20.941556  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:21.011657  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:21.440326  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:21.512699  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:21.940391  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:22.013074  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:22.441421  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:22.512726  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:22.941511  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:23.011427  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:23.441692  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:23.541961  846424 kapi.go:107] duration metric: took 1m20.033815029s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1026 14:16:23.939860  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:24.441033  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:24.940666  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:25.441106  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:25.949894  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:26.440937  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:26.940765  846424 kapi.go:107] duration metric: took 1m23.004082526s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
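
Each of the four kapi.go waits that just completed polls for pods matching a label selector until they leave Pending and report Ready. minikube does this through client-go rather than the CLI, but a rough kubectl analogue of the last wait (selector taken from the log; namespace inferred from the container listing later in this report) would be:

	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=10m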
	I1026 14:16:31.723355  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1026 14:16:32.268610  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 14:16:32.268728  846424 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1026 14:16:32.270314  846424 out.go:179] * Enabled addons: storage-provisioner, ingress-dns, amd-gpu-device-plugin, registry-creds, nvidia-device-plugin, cloud-spanner, metrics-server, yakd, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1026 14:16:32.271316  846424 addons.go:514] duration metric: took 1m30.515515484s for enable addons: enabled=[storage-provisioner ingress-dns amd-gpu-device-plugin registry-creds nvidia-device-plugin cloud-spanner metrics-server yakd default-storageclass storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1026 14:16:32.271351  846424 start.go:246] waiting for cluster config update ...
	I1026 14:16:32.271372  846424 start.go:255] writing updated cluster config ...
	I1026 14:16:32.271625  846424 ssh_runner.go:195] Run: rm -f paused
	I1026 14:16:32.275601  846424 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 14:16:32.278988  846424 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-58kmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.283800  846424 pod_ready.go:94] pod "coredns-66bc5c9577-58kmh" is "Ready"
	I1026 14:16:32.283832  846424 pod_ready.go:86] duration metric: took 4.822784ms for pod "coredns-66bc5c9577-58kmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.285751  846424 pod_ready.go:83] waiting for pod "etcd-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.289426  846424 pod_ready.go:94] pod "etcd-addons-459729" is "Ready"
	I1026 14:16:32.289447  846424 pod_ready.go:86] duration metric: took 3.67723ms for pod "etcd-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.291471  846424 pod_ready.go:83] waiting for pod "kube-apiserver-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.295066  846424 pod_ready.go:94] pod "kube-apiserver-addons-459729" is "Ready"
	I1026 14:16:32.295090  846424 pod_ready.go:86] duration metric: took 3.601221ms for pod "kube-apiserver-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.297016  846424 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.680183  846424 pod_ready.go:94] pod "kube-controller-manager-addons-459729" is "Ready"
	I1026 14:16:32.680220  846424 pod_ready.go:86] duration metric: took 383.185277ms for pod "kube-controller-manager-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.879387  846424 pod_ready.go:83] waiting for pod "kube-proxy-2f7sr" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:33.279767  846424 pod_ready.go:94] pod "kube-proxy-2f7sr" is "Ready"
	I1026 14:16:33.279836  846424 pod_ready.go:86] duration metric: took 400.42041ms for pod "kube-proxy-2f7sr" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:33.480448  846424 pod_ready.go:83] waiting for pod "kube-scheduler-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:33.880276  846424 pod_ready.go:94] pod "kube-scheduler-addons-459729" is "Ready"
	I1026 14:16:33.880305  846424 pod_ready.go:86] duration metric: took 399.829511ms for pod "kube-scheduler-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:33.880320  846424 pod_ready.go:40] duration metric: took 1.604687476s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 14:16:33.928054  846424 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 14:16:33.930783  846424 out.go:179] * Done! kubectl is now configured to use "addons-459729" cluster and "default" namespace by default
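
At this point minikube has written the kubeconfig context, so a bare kubectl targets the new cluster and its default namespace. For example (a context named after the profile is the usual minikube convention, not something shown in this log):

	kubectl config current-context   # typically prints: addons-459729
	kubectl get pods                 # lists pods in the "default" namespace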
	
	
	==> CRI-O <==
	Oct 26 14:18:06 addons-459729 crio[770]: time="2025-10-26T14:18:06.088607706Z" level=info msg="Neither image nor artifact docker.io/nginx:alpine found" id=dc62dd71-7e5d-49e7-804c-e0ec3eebd1c6 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:18:24 addons-459729 crio[770]: time="2025-10-26T14:18:24.395127842Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 26 14:18:55 addons-459729 crio[770]: time="2025-10-26T14:18:55.108587935Z" level=info msg="Trying to access \"docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8\""
	Oct 26 14:19:25 addons-459729 crio[770]: time="2025-10-26T14:19:25.75806431Z" level=info msg="Pulling image: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=409ab72c-5bcc-4291-a966-10c538f2b8cc name=/runtime.v1.ImageService/PullImage
	Oct 26 14:19:25 addons-459729 crio[770]: time="2025-10-26T14:19:25.773088359Z" level=info msg="Trying to access \"docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\""
	Oct 26 14:19:56 addons-459729 crio[770]: time="2025-10-26T14:19:56.435676799Z" level=info msg="Trying to access \"docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\""
	Oct 26 14:20:27 addons-459729 crio[770]: time="2025-10-26T14:20:27.079986155Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=1b093616-a7dc-4aac-bafc-76a2e11a536f name=/runtime.v1.ImageService/PullImage
	Oct 26 14:20:27 addons-459729 crio[770]: time="2025-10-26T14:20:27.084099863Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Oct 26 14:20:27 addons-459729 crio[770]: time="2025-10-26T14:20:27.36464145Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=f327f6b6-df0d-4a0d-bee6-adb70108a67f name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:20:27 addons-459729 crio[770]: time="2025-10-26T14:20:27.364839244Z" level=info msg="Image docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 not found" id=f327f6b6-df0d-4a0d-bee6-adb70108a67f name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:20:27 addons-459729 crio[770]: time="2025-10-26T14:20:27.364882296Z" level=info msg="Neither image nor artfiact docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 found" id=f327f6b6-df0d-4a0d-bee6-adb70108a67f name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:20:42 addons-459729 crio[770]: time="2025-10-26T14:20:42.087589177Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=0f3c14b8-0d26-4ed3-9ea4-e76e73d03a76 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:20:42 addons-459729 crio[770]: time="2025-10-26T14:20:42.087815504Z" level=info msg="Image docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 not found" id=0f3c14b8-0d26-4ed3-9ea4-e76e73d03a76 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:20:42 addons-459729 crio[770]: time="2025-10-26T14:20:42.087872021Z" level=info msg="Neither image nor artfiact docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 found" id=0f3c14b8-0d26-4ed3-9ea4-e76e73d03a76 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:21:12 addons-459729 crio[770]: time="2025-10-26T14:21:12.067436435Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Oct 26 14:21:42 addons-459729 crio[770]: time="2025-10-26T14:21:42.726799075Z" level=info msg="Pulling image: docker.io/nginx:latest" id=b6e710d0-3a93-4dc9-9f71-182aeb800e9e name=/runtime.v1.ImageService/PullImage
	Oct 26 14:21:42 addons-459729 crio[770]: time="2025-10-26T14:21:42.731407769Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 26 14:21:57 addons-459729 crio[770]: time="2025-10-26T14:21:57.087342761Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=7a15b32e-5427-4d25-93dd-fb37baddab5c name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:21:57 addons-459729 crio[770]: time="2025-10-26T14:21:57.087527648Z" level=info msg="Image docker.io/nginx:alpine not found" id=7a15b32e-5427-4d25-93dd-fb37baddab5c name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:21:57 addons-459729 crio[770]: time="2025-10-26T14:21:57.087577833Z" level=info msg="Neither image nor artfiact docker.io/nginx:alpine found" id=7a15b32e-5427-4d25-93dd-fb37baddab5c name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:22:11 addons-459729 crio[770]: time="2025-10-26T14:22:11.087632039Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=82dddf71-2445-4531-b861-2b4dddee1b41 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:22:11 addons-459729 crio[770]: time="2025-10-26T14:22:11.087829351Z" level=info msg="Image docker.io/nginx:alpine not found" id=82dddf71-2445-4531-b861-2b4dddee1b41 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:22:11 addons-459729 crio[770]: time="2025-10-26T14:22:11.087874122Z" level=info msg="Neither image nor artfiact docker.io/nginx:alpine found" id=82dddf71-2445-4531-b861-2b4dddee1b41 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:22:13 addons-459729 crio[770]: time="2025-10-26T14:22:13.385704852Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 26 14:22:44 addons-459729 crio[770]: time="2025-10-26T14:22:44.062377726Z" level=info msg="Trying to access \"docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	27b70ccf2a2bc       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          6 minutes ago       Running             busybox                                  0                   4df2b4b18d117       busybox                                     default
	19aef1ec8510c       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          6 minutes ago       Running             csi-snapshotter                          0                   76ed6035570f7       csi-hostpathplugin-86x7s                    kube-system
	61a5097a66804       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          6 minutes ago       Running             csi-provisioner                          0                   76ed6035570f7       csi-hostpathplugin-86x7s                    kube-system
	621ed44d4d0c9       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            6 minutes ago       Running             liveness-probe                           0                   76ed6035570f7       csi-hostpathplugin-86x7s                    kube-system
	423188941aea4       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             6 minutes ago       Running             controller                               0                   5f8435b6e04f2       ingress-nginx-controller-675c5ddd98-5ppwr   ingress-nginx
	441d937b8068c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 6 minutes ago       Running             gcp-auth                                 0                   323c55def826a       gcp-auth-78565c9fb4-5728j                   gcp-auth
	0957c0a36894a       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           6 minutes ago       Running             hostpath                                 0                   76ed6035570f7       csi-hostpathplugin-86x7s                    kube-system
	066ff52c2ddcd       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            6 minutes ago       Running             gadget                                   0                   4eb2ecaed9e87       gadget-kzxfz                                gadget
	3552d128c67c5       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                6 minutes ago       Running             node-driver-registrar                    0                   76ed6035570f7       csi-hostpathplugin-86x7s                    kube-system
	97c4cd86f30ed       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             6 minutes ago       Exited              patch                                    2                   abda503e132df       ingress-nginx-admission-patch-tpf9p         ingress-nginx
	e0688bdc55e0b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              6 minutes ago       Running             registry-proxy                           0                   e7362f18db413       registry-proxy-cs2k2                        kube-system
	0f54646dd806e       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     6 minutes ago       Running             nvidia-device-plugin-ctr                 0                   c4c36c0bc4659       nvidia-device-plugin-daemonset-24shm        kube-system
	83682e4a110f1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   6 minutes ago       Running             csi-external-health-monitor-controller   0                   76ed6035570f7       csi-hostpathplugin-86x7s                    kube-system
	ea6861a45ac70       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     6 minutes ago       Running             amd-gpu-device-plugin                    0                   c6d4e2f783cad       amd-gpu-device-plugin-cpl45                 kube-system
	0314c0bc382ed       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      6 minutes ago       Running             volume-snapshot-controller               0                   00d15442e9fe3       snapshot-controller-7d9fbc56b8-d9lzl        kube-system
	8362d34d3550e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   6 minutes ago       Exited              create                                   0                   f3bf9fde8769c       ingress-nginx-admission-create-6rf28        ingress-nginx
	12266be6b9ab3       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      6 minutes ago       Running             volume-snapshot-controller               0                   9ef34a2a027ac       snapshot-controller-7d9fbc56b8-wrh9q        kube-system
	7c8dc6d14b139       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             7 minutes ago       Running             csi-attacher                             0                   7976607f84d97       csi-hostpath-attacher-0                     kube-system
	e712266799f11       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           7 minutes ago       Running             registry                                 0                   f1316c3452f72       registry-6b586f9694-ds6k9                   kube-system
	c3bf40d60ab5e       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              7 minutes ago       Running             csi-resizer                              0                   2cae445dde2d5       csi-hostpath-resizer-0                      kube-system
	b63192b7f745f       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              7 minutes ago       Running             yakd                                     0                   2c72fc205123b       yakd-dashboard-5ff678cb9-dn24s              yakd-dashboard
	c19ddca298d1e       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             7 minutes ago       Running             local-path-provisioner                   0                   1bf1c34fc4541       local-path-provisioner-648f6765c9-zlb8q     local-path-storage
	1c530a50ccecc       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               7 minutes ago       Running             cloud-spanner-emulator                   0                   dfaf3d25c7f4b       cloud-spanner-emulator-86bd5cbb97-xfwfj     default
	db7c2a98e81df       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               7 minutes ago       Running             minikube-ingress-dns                     0                   52ca272c9227c       kube-ingress-dns-minikube                   kube-system
	9bd2912e692dc       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        7 minutes ago       Running             metrics-server                           0                   7b5aa0bab6500       metrics-server-85b7d694d7-g2nwm             kube-system
	ea11dd25ee99e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             7 minutes ago       Running             coredns                                  0                   b9bf05c027e23       coredns-66bc5c9577-58kmh                    kube-system
	6ec65c531ce9b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             7 minutes ago       Running             storage-provisioner                      0                   7e2edd03c74dd       storage-provisioner                         kube-system
	4f25f66b4cedf       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             7 minutes ago       Running             kube-proxy                               0                   a6c25e9b56e3a       kube-proxy-2f7sr                            kube-system
	a0eba15d448be       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             7 minutes ago       Running             kindnet-cni                              0                   84e022be55df3       kindnet-qskcd                               kube-system
	c2b16514601ac       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             8 minutes ago       Running             kube-controller-manager                  0                   b6986a1a2b4b0       kube-controller-manager-addons-459729       kube-system
	102e7dda91245       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             8 minutes ago       Running             kube-scheduler                           0                   79e5b59eeb1c5       kube-scheduler-addons-459729                kube-system
	4150a83c0db93       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             8 minutes ago       Running             etcd                                     0                   d6e35f5ca53c8       etcd-addons-459729                          kube-system
	7a9a679c5c891       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             8 minutes ago       Running             kube-apiserver                           0                   d283821e23e4a       kube-apiserver-addons-459729                kube-system
	
	
	==> coredns [ea11dd25ee99edc9b27421bacea724bf74b1fec81e1f33251d8241d538f0bd7b] <==
	[INFO] 10.244.0.17:43132 - 21631 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000122886s
	[INFO] 10.244.0.17:55984 - 62108 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000085792s
	[INFO] 10.244.0.17:55984 - 62274 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000172297s
	[INFO] 10.244.0.17:59534 - 46029 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000085485s
	[INFO] 10.244.0.17:59534 - 45635 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000130065s
	[INFO] 10.244.0.17:35492 - 64690 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000118967s
	[INFO] 10.244.0.17:35492 - 64268 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000152403s
	[INFO] 10.244.0.21:54006 - 22748 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000194069s
	[INFO] 10.244.0.21:45352 - 54900 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00026904s
	[INFO] 10.244.0.21:38334 - 25222 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129109s
	[INFO] 10.244.0.21:34539 - 64672 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000226506s
	[INFO] 10.244.0.21:59972 - 30687 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125292s
	[INFO] 10.244.0.21:34145 - 41111 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000153861s
	[INFO] 10.244.0.21:52994 - 11684 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003138228s
	[INFO] 10.244.0.21:36916 - 32432 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.004561076s
	[INFO] 10.244.0.21:50024 - 33145 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.003880565s
	[INFO] 10.244.0.21:48825 - 39484 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.0061693s
	[INFO] 10.244.0.21:56944 - 27445 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004052333s
	[INFO] 10.244.0.21:39046 - 54424 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005945025s
	[INFO] 10.244.0.21:51579 - 13184 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004308646s
	[INFO] 10.244.0.21:39799 - 50681 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005368105s
	[INFO] 10.244.0.21:57974 - 51048 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001082611s
	[INFO] 10.244.0.21:51671 - 13280 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001179932s
	[INFO] 10.244.0.26:58819 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000293264s
	[INFO] 10.244.0.26:35243 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000189185s
	
	
	==> describe nodes <==
	Name:               addons-459729
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-459729
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=addons-459729
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T14_14_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-459729
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-459729"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 14:14:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-459729
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 14:22:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 14:21:54 +0000   Sun, 26 Oct 2025 14:14:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 14:21:54 +0000   Sun, 26 Oct 2025 14:14:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 14:21:54 +0000   Sun, 26 Oct 2025 14:14:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 14:21:54 +0000   Sun, 26 Oct 2025 14:15:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-459729
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                f0596a61-354d-402e-9406-4163a5db7e7d
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  default                     cloud-spanner-emulator-86bd5cbb97-xfwfj      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  gadget                      gadget-kzxfz                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	  gcp-auth                    gcp-auth-78565c9fb4-5728j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m50s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-5ppwr    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         7m57s
	  kube-system                 amd-gpu-device-plugin-cpl45                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 coredns-66bc5c9577-58kmh                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m59s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 csi-hostpathplugin-86x7s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 etcd-addons-459729                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m5s
	  kube-system                 kindnet-qskcd                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m59s
	  kube-system                 kube-apiserver-addons-459729                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m4s
	  kube-system                 kube-controller-manager-addons-459729        200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m4s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 kube-proxy-2f7sr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 kube-scheduler-addons-459729                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m4s
	  kube-system                 metrics-server-85b7d694d7-g2nwm              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         7m57s
	  kube-system                 nvidia-device-plugin-daemonset-24shm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 registry-6b586f9694-ds6k9                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 registry-creds-764b6fb674-dk4lc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 registry-proxy-cs2k2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 snapshot-controller-7d9fbc56b8-d9lzl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 snapshot-controller-7d9fbc56b8-wrh9q         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  local-path-storage          local-path-provisioner-648f6765c9-zlb8q      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-dn24s               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     7m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m57s  kube-proxy       
	  Normal  Starting                 8m4s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m4s   kubelet          Node addons-459729 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m4s   kubelet          Node addons-459729 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m4s   kubelet          Node addons-459729 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m     node-controller  Node addons-459729 event: Registered Node addons-459729 in Controller
	  Normal  NodeReady                7m17s  kubelet          Node addons-459729 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	
	
	==> etcd [4150a83c0db93bd824ae7492cd5bbd3cd5b925dc5e29702692a93bb4ebe91e4a] <==
	{"level":"warn","ts":"2025-10-26T14:15:04.383906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:04.391461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:30.725132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:30.731839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:30.749203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:30.756191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:16:06.673503Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.362269ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T14:16:06.673597Z","caller":"traceutil/trace.go:172","msg":"trace[1407356744] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1094; }","duration":"162.491663ms","start":"2025-10-26T14:16:06.511089Z","end":"2025-10-26T14:16:06.673580Z","steps":["trace[1407356744] 'agreement among raft nodes before linearized reading'  (duration: 44.429446ms)","trace[1407356744] 'range keys from in-memory index tree'  (duration: 117.89894ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T14:16:06.675114Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.362867ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040893471723429 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-nt254\" mod_revision:1091 > success:<request_put:<key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-nt254\" value_size:4081 >> failure:<request_range:<key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-nt254\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-26T14:16:06.675395Z","caller":"traceutil/trace.go:172","msg":"trace[1217944354] linearizableReadLoop","detail":"{readStateIndex:1125; appliedIndex:1124; }","duration":"119.89607ms","start":"2025-10-26T14:16:06.555484Z","end":"2025-10-26T14:16:06.675380Z","steps":["trace[1217944354] 'read index received'  (duration: 18.538µs)","trace[1217944354] 'applied index is now lower than readState.Index'  (duration: 119.876207ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T14:16:06.675424Z","caller":"traceutil/trace.go:172","msg":"trace[470063816] transaction","detail":"{read_only:false; response_revision:1095; number_of_response:1; }","duration":"196.815195ms","start":"2025-10-26T14:16:06.478586Z","end":"2025-10-26T14:16:06.675401Z","steps":["trace[470063816] 'process raft request'  (duration: 76.965623ms)","trace[470063816] 'compare'  (duration: 117.805435ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T14:16:06.675523Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"164.37334ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T14:16:06.675739Z","caller":"traceutil/trace.go:172","msg":"trace[1813938213] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1095; }","duration":"164.587149ms","start":"2025-10-26T14:16:06.511134Z","end":"2025-10-26T14:16:06.675722Z","steps":["trace[1813938213] 'agreement among raft nodes before linearized reading'  (duration: 164.337405ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T14:16:06.839498Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.209135ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-10-26T14:16:06.839705Z","caller":"traceutil/trace.go:172","msg":"trace[1627999609] transaction","detail":"{read_only:false; response_revision:1096; number_of_response:1; }","duration":"157.511557ms","start":"2025-10-26T14:16:06.682155Z","end":"2025-10-26T14:16:06.839666Z","steps":["trace[1627999609] 'process raft request'  (duration: 113.355136ms)","trace[1627999609] 'compare'  (duration: 43.875174ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T14:16:06.839935Z","caller":"traceutil/trace.go:172","msg":"trace[222252180] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1095; }","duration":"134.546756ms","start":"2025-10-26T14:16:06.705111Z","end":"2025-10-26T14:16:06.839657Z","steps":["trace[222252180] 'agreement among raft nodes before linearized reading'  (duration: 90.30554ms)","trace[222252180] 'range keys from in-memory index tree'  (duration: 43.778269ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T14:16:10.550138Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.904128ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040893471723491 > lease_revoke:<id:70cc9a20df3b0e67>","response":"size:29"}
	{"level":"info","ts":"2025-10-26T14:16:10.550263Z","caller":"traceutil/trace.go:172","msg":"trace[486118013] linearizableReadLoop","detail":"{readStateIndex:1137; appliedIndex:1136; }","duration":"110.54143ms","start":"2025-10-26T14:16:10.439705Z","end":"2025-10-26T14:16:10.550246Z","steps":["trace[486118013] 'read index received'  (duration: 38.875µs)","trace[486118013] 'applied index is now lower than readState.Index'  (duration: 110.501597ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T14:16:10.550396Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.681137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T14:16:10.550436Z","caller":"traceutil/trace.go:172","msg":"trace[1478287923] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1106; }","duration":"110.733039ms","start":"2025-10-26T14:16:10.439691Z","end":"2025-10-26T14:16:10.550424Z","steps":["trace[1478287923] 'agreement among raft nodes before linearized reading'  (duration: 110.638778ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T14:16:16.552003Z","caller":"traceutil/trace.go:172","msg":"trace[1516111140] linearizableReadLoop","detail":"{readStateIndex:1173; appliedIndex:1173; }","duration":"112.70591ms","start":"2025-10-26T14:16:16.439268Z","end":"2025-10-26T14:16:16.551974Z","steps":["trace[1516111140] 'read index received'  (duration: 112.69236ms)","trace[1516111140] 'applied index is now lower than readState.Index'  (duration: 11.711µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T14:16:16.552177Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.877721ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T14:16:16.552215Z","caller":"traceutil/trace.go:172","msg":"trace[102515432] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1141; }","duration":"112.949469ms","start":"2025-10-26T14:16:16.439258Z","end":"2025-10-26T14:16:16.552208Z","steps":["trace[102515432] 'agreement among raft nodes before linearized reading'  (duration: 112.841453ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T14:16:16.552194Z","caller":"traceutil/trace.go:172","msg":"trace[1700144795] transaction","detail":"{read_only:false; response_revision:1142; number_of_response:1; }","duration":"131.964726ms","start":"2025-10-26T14:16:16.420209Z","end":"2025-10-26T14:16:16.552174Z","steps":["trace[1700144795] 'process raft request'  (duration: 131.800273ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T14:16:16.701205Z","caller":"traceutil/trace.go:172","msg":"trace[808680941] transaction","detail":"{read_only:false; response_revision:1143; number_of_response:1; }","duration":"143.197143ms","start":"2025-10-26T14:16:16.557989Z","end":"2025-10-26T14:16:16.701187Z","steps":["trace[808680941] 'process raft request'  (duration: 143.039766ms)"],"step_count":1}
	
	
	==> gcp-auth [441d937b8068cc86fcb3a873cae9bcb6e3f4a3e79071a803935c38b3f14746aa] <==
	2025/10/26 14:16:19 GCP Auth Webhook started!
	2025/10/26 14:16:34 Ready to marshal response ...
	2025/10/26 14:16:34 Ready to write response ...
	2025/10/26 14:16:34 Ready to marshal response ...
	2025/10/26 14:16:34 Ready to write response ...
	2025/10/26 14:16:34 Ready to marshal response ...
	2025/10/26 14:16:34 Ready to write response ...
	2025/10/26 14:16:43 Ready to marshal response ...
	2025/10/26 14:16:43 Ready to write response ...
	2025/10/26 14:16:43 Ready to marshal response ...
	2025/10/26 14:16:43 Ready to write response ...
	2025/10/26 14:16:51 Ready to marshal response ...
	2025/10/26 14:16:51 Ready to write response ...
	2025/10/26 14:16:51 Ready to marshal response ...
	2025/10/26 14:16:51 Ready to write response ...
	2025/10/26 14:16:52 Ready to marshal response ...
	2025/10/26 14:16:52 Ready to write response ...
	2025/10/26 14:16:58 Ready to marshal response ...
	2025/10/26 14:16:58 Ready to write response ...
	
	
	==> kernel <==
	 14:23:00 up  2:05,  0 user,  load average: 0.38, 0.43, 0.94
	Linux addons-459729 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a0eba15d448bec4198d79695967a6f8e6718f30814fcdde9252cc843d58f1702] <==
	I1026 14:20:52.854758       1 main.go:301] handling current node
	I1026 14:21:02.856070       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:21:02.856108       1 main.go:301] handling current node
	I1026 14:21:12.854968       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:21:12.855002       1 main.go:301] handling current node
	I1026 14:21:22.856097       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:21:22.856134       1 main.go:301] handling current node
	I1026 14:21:32.863295       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:21:32.863327       1 main.go:301] handling current node
	I1026 14:21:42.857467       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:21:42.857499       1 main.go:301] handling current node
	I1026 14:21:52.859260       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:21:52.859304       1 main.go:301] handling current node
	I1026 14:22:02.854954       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:22:02.854990       1 main.go:301] handling current node
	I1026 14:22:12.855150       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:22:12.855217       1 main.go:301] handling current node
	I1026 14:22:22.859276       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:22:22.859326       1 main.go:301] handling current node
	I1026 14:22:32.859104       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:22:32.859139       1 main.go:301] handling current node
	I1026 14:22:42.854261       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:22:42.854297       1 main.go:301] handling current node
	I1026 14:22:52.854241       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:22:52.854282       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7a9a679c5c891888d2fe6da11a5021a47a92d61386bbbc79c23ddd0de01e1321] <==
	E1026 14:15:47.285479       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1026 14:15:47.285903       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.72.119:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.72.119:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.72.119:443: connect: connection refused" logger="UnhandledError"
	W1026 14:15:48.288076       1 handler_proxy.go:99] no RequestInfo found in the context
	W1026 14:15:48.288110       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 14:15:48.288150       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 14:15:48.288194       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1026 14:15:48.288197       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 14:15:48.289343       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 14:15:52.296856       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 14:15:52.296916       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1026 14:15:52.297001       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.72.119:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.72.119:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1026 14:15:52.305409       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1026 14:16:40.620694       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43486: use of closed network connection
	E1026 14:16:40.776236       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43516: use of closed network connection
	I1026 14:16:51.862047       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1026 14:16:52.188280       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.229.60"}
	
	
	==> kube-controller-manager [c2b16514601ac206983ecc827f418a7f7c9779b86a8ac77a095c139429ddb09c] <==
	I1026 14:15:00.709179       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 14:15:00.709318       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 14:15:00.709123       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 14:15:00.709809       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 14:15:00.711675       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:15:00.711687       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 14:15:00.713528       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 14:15:00.714754       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:15:00.716565       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 14:15:00.716650       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 14:15:00.716691       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 14:15:00.716697       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 14:15:00.716703       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 14:15:00.717979       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 14:15:00.723523       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-459729" podCIDRs=["10.244.0.0/24"]
	I1026 14:15:00.729033       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1026 14:15:03.029718       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1026 14:15:30.716451       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 14:15:30.716588       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1026 14:15:30.716640       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1026 14:15:30.737460       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1026 14:15:30.741504       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1026 14:15:30.817050       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:15:30.842433       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 14:15:45.647726       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4f25f66b4cedfe4a67445f7535bebe5278f7e84ec91c43ad9eee37d250277e78] <==
	I1026 14:15:02.702855       1 server_linux.go:53] "Using iptables proxy"
	I1026 14:15:02.996001       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 14:15:03.096217       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 14:15:03.096266       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 14:15:03.096360       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 14:15:03.183548       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 14:15:03.183613       1 server_linux.go:132] "Using iptables Proxier"
	I1026 14:15:03.194275       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 14:15:03.197537       1 server.go:527] "Version info" version="v1.34.1"
	I1026 14:15:03.197760       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 14:15:03.199843       1 config.go:200] "Starting service config controller"
	I1026 14:15:03.200789       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 14:15:03.200404       1 config.go:106] "Starting endpoint slice config controller"
	I1026 14:15:03.200979       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 14:15:03.200421       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 14:15:03.200995       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 14:15:03.201046       1 config.go:309] "Starting node config controller"
	I1026 14:15:03.201051       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 14:15:03.201056       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 14:15:03.301636       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 14:15:03.301650       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 14:15:03.301679       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [102e7dda912458a4fb7c5cf795d24e3f7f8111609a7f9f3d6aa2ac793be7d8ed] <==
	E1026 14:14:53.714707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 14:14:53.714721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 14:14:53.714925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 14:14:53.715056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 14:14:53.715252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 14:14:53.715267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:14:53.715339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 14:14:53.715410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 14:14:53.715473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 14:14:53.715543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 14:14:53.715570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 14:14:53.716376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 14:14:54.598092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 14:14:54.611465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 14:14:54.687609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 14:14:54.701877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 14:14:54.779666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 14:14:54.787848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1026 14:14:54.799124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:14:54.827579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 14:14:54.851711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 14:14:54.882786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 14:14:54.883667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 14:14:54.953839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1026 14:14:57.411028       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 14:18:49 addons-459729 kubelet[1307]: I1026 14:18:49.086255    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-cpl45" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:19:25 addons-459729 kubelet[1307]: E1026 14:19:25.757523    1307 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 26 14:19:25 addons-459729 kubelet[1307]: E1026 14:19:25.757601    1307 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 26 14:19:25 addons-459729 kubelet[1307]: E1026 14:19:25.757871    1307 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(4bad36d2-59a2-4ff8-b30a-5b4bfd7f204f): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 26 14:19:25 addons-459729 kubelet[1307]: E1026 14:19:25.757959    1307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="4bad36d2-59a2-4ff8-b30a-5b4bfd7f204f"
	Oct 26 14:19:26 addons-459729 kubelet[1307]: E1026 14:19:26.158291    1307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="4bad36d2-59a2-4ff8-b30a-5b4bfd7f204f"
	Oct 26 14:19:38 addons-459729 kubelet[1307]: I1026 14:19:38.086108    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-24shm" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:19:57 addons-459729 kubelet[1307]: I1026 14:19:57.086218    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cs2k2" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:20:14 addons-459729 kubelet[1307]: I1026 14:20:14.086953    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-cpl45" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:20:27 addons-459729 kubelet[1307]: E1026 14:20:27.079470    1307 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605"
	Oct 26 14:20:27 addons-459729 kubelet[1307]: E1026 14:20:27.079548    1307 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605"
	Oct 26 14:20:27 addons-459729 kubelet[1307]: E1026 14:20:27.079756    1307 kuberuntime_manager.go:1449] "Unhandled Error" err="container registry-creds start failed in pod registry-creds-764b6fb674-dk4lc_kube-system(11a2adc0-f603-426f-af30-919a48eee4bc): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 26 14:20:27 addons-459729 kubelet[1307]: E1026 14:20:27.079826    1307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-creds-764b6fb674-dk4lc" podUID="11a2adc0-f603-426f-af30-919a48eee4bc"
	Oct 26 14:20:27 addons-459729 kubelet[1307]: E1026 14:20:27.365240    1307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 in docker.io/upmcenterprises/registry-creds: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-creds-764b6fb674-dk4lc" podUID="11a2adc0-f603-426f-af30-919a48eee4bc"
	Oct 26 14:20:45 addons-459729 kubelet[1307]: I1026 14:20:45.086993    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-24shm" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:21:01 addons-459729 kubelet[1307]: I1026 14:21:01.086412    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cs2k2" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:21:21 addons-459729 kubelet[1307]: I1026 14:21:21.086836    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-cpl45" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:21:42 addons-459729 kubelet[1307]: E1026 14:21:42.726327    1307 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 26 14:21:42 addons-459729 kubelet[1307]: E1026 14:21:42.726395    1307 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 26 14:21:42 addons-459729 kubelet[1307]: E1026 14:21:42.726640    1307 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(d99505c9-bb9c-4c52-90e0-9ab7033b32bf): ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 26 14:21:42 addons-459729 kubelet[1307]: E1026 14:21:42.726713    1307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="d99505c9-bb9c-4c52-90e0-9ab7033b32bf"
	Oct 26 14:21:57 addons-459729 kubelet[1307]: E1026 14:21:57.088040    1307 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="d99505c9-bb9c-4c52-90e0-9ab7033b32bf"
	Oct 26 14:22:09 addons-459729 kubelet[1307]: I1026 14:22:09.086942    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-24shm" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:22:24 addons-459729 kubelet[1307]: I1026 14:22:24.086732    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cs2k2" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:22:45 addons-459729 kubelet[1307]: I1026 14:22:45.086405    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-cpl45" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [6ec65c531ce9b20e7dfdb9cdb1623754497a4088bbed9f545ad3b0f28e423539] <==
	W1026 14:22:35.651620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:37.656158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:37.660607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:39.664378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:39.668390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:41.671799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:41.675834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:43.678675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:43.684558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:45.688565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:45.694210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:47.697346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:47.703227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:49.706692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:49.711746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:51.715289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:51.719664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:53.723858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:53.728047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:55.732125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:55.736603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:57.740099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:57.745562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:59.750093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:59.755194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-459729 -n addons-459729
helpers_test.go:269: (dbg) Run:  kubectl --context addons-459729 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod ingress-nginx-admission-create-6rf28 ingress-nginx-admission-patch-tpf9p registry-creds-764b6fb674-dk4lc
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-459729 describe pod nginx task-pv-pod ingress-nginx-admission-create-6rf28 ingress-nginx-admission-patch-tpf9p registry-creds-764b6fb674-dk4lc
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-459729 describe pod nginx task-pv-pod ingress-nginx-admission-create-6rf28 ingress-nginx-admission-patch-tpf9p registry-creds-764b6fb674-dk4lc: exit status 1 (74.473734ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-459729/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:16:52 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwdp7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xwdp7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  6m9s                default-scheduler  Successfully assigned default/nginx to addons-459729
	  Warning  Failed     79s (x2 over 5m8s)  kubelet            Failed to pull image "docker.io/nginx:alpine": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     79s (x2 over 5m8s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    64s (x2 over 5m8s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     64s (x2 over 5m8s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    50s (x3 over 6m9s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-459729/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:16:58 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vhr62 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-vhr62:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m3s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-459729
	  Warning  Failed     3m36s                 kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m36s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    3m35s                 kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     3m35s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    3m20s (x2 over 6m2s)  kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6rf28" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tpf9p" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-dk4lc" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-459729 describe pod nginx task-pv-pod ingress-nginx-admission-create-6rf28 ingress-nginx-admission-patch-tpf9p registry-creds-764b6fb674-dk4lc: exit status 1
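Both describe outputs above fail the same way: the kubelet's unauthenticated pulls of docker.io/nginx hit Docker Hub's pull rate limit (the toomanyrequests events), so the pods never leave ImagePullBackOff. A minimal mitigation sketch, assuming a host docker daemon that can still pull (for example, an authenticated one) and the addons-459729 profile from this run; `minikube image load` side-loads the image into the node's image store, and because nginx:alpine is a non-latest tag (imagePullPolicy defaults to IfNotPresent), the next kubelet retry should use the cached copy without touching Docker Hub:

	# hypothetical mitigation, not part of the recorded run
	docker pull docker.io/nginx:alpine
	out/minikube-linux-amd64 -p addons-459729 image load docker.io/nginx:alpine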
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-459729 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-459729 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (253.155016ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 14:23:01.660521  862118 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:23:01.660792  862118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:23:01.660801  862118 out.go:374] Setting ErrFile to fd 2...
	I1026 14:23:01.660805  862118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:23:01.661012  862118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:23:01.661298  862118 mustload.go:65] Loading cluster: addons-459729
	I1026 14:23:01.661627  862118 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:23:01.661643  862118 addons.go:606] checking whether the cluster is paused
	I1026 14:23:01.661725  862118 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:23:01.661736  862118 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:23:01.662115  862118 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:23:01.679754  862118 ssh_runner.go:195] Run: systemctl --version
	I1026 14:23:01.679816  862118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:23:01.697254  862118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:23:01.796060  862118 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:23:01.796156  862118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:23:01.826235  862118 cri.go:89] found id: "19aef1ec8510c14e849b7cefcdc09f57ad870ee7d19676222f9e11dadd8cc042"
	I1026 14:23:01.826261  862118 cri.go:89] found id: "61a5097a66804c567922e9da53afc210c2fdbb85ff910118e9760dee39f0d040"
	I1026 14:23:01.826266  862118 cri.go:89] found id: "621ed44d4d0c9c98dcc6f5d7791c964154a9fdfc066b031a81eea94bead4881f"
	I1026 14:23:01.826270  862118 cri.go:89] found id: "0957c0a36894ac64f64707cab794cc2ea3ec3052b89e5973d410bc3d470f0ccc"
	I1026 14:23:01.826273  862118 cri.go:89] found id: "3552d128c67c5f8bc101f8fec4ea4a567c8e554450e010cea9fff33e2fb35c57"
	I1026 14:23:01.826278  862118 cri.go:89] found id: "e0688bdc55e0b1428d713099dfcdead41642afc46111de5efa3f9e8fc577a82f"
	I1026 14:23:01.826282  862118 cri.go:89] found id: "0f54646dd806e6f1d2d2a55010ade3d07b7c4c78f14093b5ea24c778c704d8d9"
	I1026 14:23:01.826286  862118 cri.go:89] found id: "83682e4a110f1836b76b9ab37ae5bdb5165df03ddd6d4aab400697fb4757a66a"
	I1026 14:23:01.826290  862118 cri.go:89] found id: "ea6861a45ac70f5a40063121e871650cf8d06fbf282521746f2f1cec0f96e741"
	I1026 14:23:01.826299  862118 cri.go:89] found id: "0314c0bc382ed36965ef868e31dc0f76b6d82e34f43bf5a49c4799ecd426990c"
	I1026 14:23:01.826303  862118 cri.go:89] found id: "12266be6b9ab3bae1170a4813366b003d8d74419265ae8317f745310842b0eb6"
	I1026 14:23:01.826317  862118 cri.go:89] found id: "7c8dc6d14b139c980202322abce8e8be08218ec570fe222c54763e5032be2feb"
	I1026 14:23:01.826327  862118 cri.go:89] found id: "e712266799f113c6e29070d3b446eb814ab3d82a01e5503cf6d420bc5d9dd807"
	I1026 14:23:01.826331  862118 cri.go:89] found id: "c3bf40d60ab5e31a883ca325e0e0ec980516a554582873a5c7653558a6a05c25"
	I1026 14:23:01.826336  862118 cri.go:89] found id: "db7c2a98e81dfa3a84fa710f2fe409325e697b34c28852544eccec3493ba6c36"
	I1026 14:23:01.826351  862118 cri.go:89] found id: "9bd2912e692dc7dc8832b9f484bdfcb583e9e399f257d572d4fddb38842ac29a"
	I1026 14:23:01.826359  862118 cri.go:89] found id: "ea11dd25ee99edc9b27421bacea724bf74b1fec81e1f33251d8241d538f0bd7b"
	I1026 14:23:01.826366  862118 cri.go:89] found id: "6ec65c531ce9b20e7dfdb9cdb1623754497a4088bbed9f545ad3b0f28e423539"
	I1026 14:23:01.826369  862118 cri.go:89] found id: "4f25f66b4cedfe4a67445f7535bebe5278f7e84ec91c43ad9eee37d250277e78"
	I1026 14:23:01.826372  862118 cri.go:89] found id: "a0eba15d448bec4198d79695967a6f8e6718f30814fcdde9252cc843d58f1702"
	I1026 14:23:01.826376  862118 cri.go:89] found id: "c2b16514601ac206983ecc827f418a7f7c9779b86a8ac77a095c139429ddb09c"
	I1026 14:23:01.826379  862118 cri.go:89] found id: "102e7dda912458a4fb7c5cf795d24e3f7f8111609a7f9f3d6aa2ac793be7d8ed"
	I1026 14:23:01.826383  862118 cri.go:89] found id: "4150a83c0db93bd824ae7492cd5bbd3cd5b925dc5e29702692a93bb4ebe91e4a"
	I1026 14:23:01.826387  862118 cri.go:89] found id: "7a9a679c5c891888d2fe6da11a5021a47a92d61386bbbc79c23ddd0de01e1321"
	I1026 14:23:01.826392  862118 cri.go:89] found id: ""
	I1026 14:23:01.826441  862118 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:23:01.841740  862118 out.go:203] 
	W1026 14:23:01.843154  862118 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:23:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:23:01.843203  862118 out.go:285] * 
	W1026 14:23:01.848073  862118 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:23:01.849432  862118 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-459729 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
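The disable never reaches the volumesnapshots addon itself: minikube first checks whether the cluster is paused, and on this crio node that check dies at `sudo runc list -f json` because /run/runc does not exist, even though the crictl listing immediately before it succeeds. A diagnostic sketch, assuming the addons-459729 node is still running:

	# hypothetical reproduction of the failing paused check, run from the host
	out/minikube-linux-amd64 -p addons-459729 ssh -- sudo runc list -f json
	# expected to fail: open /run/runc: no such file or directory
	out/minikube-linux-amd64 -p addons-459729 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# expected to succeed, printing the same kube-system container IDs listed above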
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-459729 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-459729 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (253.834567ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 14:23:01.911200  862182 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:23:01.911470  862182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:23:01.911481  862182 out.go:374] Setting ErrFile to fd 2...
	I1026 14:23:01.911487  862182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:23:01.911743  862182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:23:01.912033  862182 mustload.go:65] Loading cluster: addons-459729
	I1026 14:23:01.912441  862182 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:23:01.912461  862182 addons.go:606] checking whether the cluster is paused
	I1026 14:23:01.912570  862182 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:23:01.912595  862182 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:23:01.913020  862182 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:23:01.930940  862182 ssh_runner.go:195] Run: systemctl --version
	I1026 14:23:01.930996  862182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:23:01.948537  862182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:23:02.048136  862182 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:23:02.048230  862182 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:23:02.077937  862182 cri.go:89] found id: "19aef1ec8510c14e849b7cefcdc09f57ad870ee7d19676222f9e11dadd8cc042"
	I1026 14:23:02.077963  862182 cri.go:89] found id: "61a5097a66804c567922e9da53afc210c2fdbb85ff910118e9760dee39f0d040"
	I1026 14:23:02.077969  862182 cri.go:89] found id: "621ed44d4d0c9c98dcc6f5d7791c964154a9fdfc066b031a81eea94bead4881f"
	I1026 14:23:02.077973  862182 cri.go:89] found id: "0957c0a36894ac64f64707cab794cc2ea3ec3052b89e5973d410bc3d470f0ccc"
	I1026 14:23:02.077976  862182 cri.go:89] found id: "3552d128c67c5f8bc101f8fec4ea4a567c8e554450e010cea9fff33e2fb35c57"
	I1026 14:23:02.077981  862182 cri.go:89] found id: "e0688bdc55e0b1428d713099dfcdead41642afc46111de5efa3f9e8fc577a82f"
	I1026 14:23:02.077985  862182 cri.go:89] found id: "0f54646dd806e6f1d2d2a55010ade3d07b7c4c78f14093b5ea24c778c704d8d9"
	I1026 14:23:02.077989  862182 cri.go:89] found id: "83682e4a110f1836b76b9ab37ae5bdb5165df03ddd6d4aab400697fb4757a66a"
	I1026 14:23:02.077993  862182 cri.go:89] found id: "ea6861a45ac70f5a40063121e871650cf8d06fbf282521746f2f1cec0f96e741"
	I1026 14:23:02.078010  862182 cri.go:89] found id: "0314c0bc382ed36965ef868e31dc0f76b6d82e34f43bf5a49c4799ecd426990c"
	I1026 14:23:02.078019  862182 cri.go:89] found id: "12266be6b9ab3bae1170a4813366b003d8d74419265ae8317f745310842b0eb6"
	I1026 14:23:02.078023  862182 cri.go:89] found id: "7c8dc6d14b139c980202322abce8e8be08218ec570fe222c54763e5032be2feb"
	I1026 14:23:02.078027  862182 cri.go:89] found id: "e712266799f113c6e29070d3b446eb814ab3d82a01e5503cf6d420bc5d9dd807"
	I1026 14:23:02.078031  862182 cri.go:89] found id: "c3bf40d60ab5e31a883ca325e0e0ec980516a554582873a5c7653558a6a05c25"
	I1026 14:23:02.078036  862182 cri.go:89] found id: "db7c2a98e81dfa3a84fa710f2fe409325e697b34c28852544eccec3493ba6c36"
	I1026 14:23:02.078049  862182 cri.go:89] found id: "9bd2912e692dc7dc8832b9f484bdfcb583e9e399f257d572d4fddb38842ac29a"
	I1026 14:23:02.078054  862182 cri.go:89] found id: "ea11dd25ee99edc9b27421bacea724bf74b1fec81e1f33251d8241d538f0bd7b"
	I1026 14:23:02.078060  862182 cri.go:89] found id: "6ec65c531ce9b20e7dfdb9cdb1623754497a4088bbed9f545ad3b0f28e423539"
	I1026 14:23:02.078064  862182 cri.go:89] found id: "4f25f66b4cedfe4a67445f7535bebe5278f7e84ec91c43ad9eee37d250277e78"
	I1026 14:23:02.078067  862182 cri.go:89] found id: "a0eba15d448bec4198d79695967a6f8e6718f30814fcdde9252cc843d58f1702"
	I1026 14:23:02.078071  862182 cri.go:89] found id: "c2b16514601ac206983ecc827f418a7f7c9779b86a8ac77a095c139429ddb09c"
	I1026 14:23:02.078075  862182 cri.go:89] found id: "102e7dda912458a4fb7c5cf795d24e3f7f8111609a7f9f3d6aa2ac793be7d8ed"
	I1026 14:23:02.078079  862182 cri.go:89] found id: "4150a83c0db93bd824ae7492cd5bbd3cd5b925dc5e29702692a93bb4ebe91e4a"
	I1026 14:23:02.078083  862182 cri.go:89] found id: "7a9a679c5c891888d2fe6da11a5021a47a92d61386bbbc79c23ddd0de01e1321"
	I1026 14:23:02.078093  862182 cri.go:89] found id: ""
	I1026 14:23:02.078141  862182 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:23:02.096808  862182 out.go:203] 
	W1026 14:23:02.098150  862182 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:23:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:23:02.098201  862182 out.go:285] * 
	W1026 14:23:02.102821  862182 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:23:02.104396  862182 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-459729 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (369.45s)

TestAddons/parallel/Headlamp (2.69s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-459729 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-459729 --alsologtostderr -v=1: exit status 11 (269.560311ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 14:16:41.114633  854848 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:16:41.114954  854848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:16:41.114968  854848 out.go:374] Setting ErrFile to fd 2...
	I1026 14:16:41.114974  854848 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:16:41.115321  854848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:16:41.115728  854848 mustload.go:65] Loading cluster: addons-459729
	I1026 14:16:41.116294  854848 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:41.116324  854848 addons.go:606] checking whether the cluster is paused
	I1026 14:16:41.116478  854848 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:41.116499  854848 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:16:41.117059  854848 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:16:41.135967  854848 ssh_runner.go:195] Run: systemctl --version
	I1026 14:16:41.136032  854848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:16:41.154155  854848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:16:41.254518  854848 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:16:41.254610  854848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:16:41.285208  854848 cri.go:89] found id: "19aef1ec8510c14e849b7cefcdc09f57ad870ee7d19676222f9e11dadd8cc042"
	I1026 14:16:41.285241  854848 cri.go:89] found id: "61a5097a66804c567922e9da53afc210c2fdbb85ff910118e9760dee39f0d040"
	I1026 14:16:41.285246  854848 cri.go:89] found id: "621ed44d4d0c9c98dcc6f5d7791c964154a9fdfc066b031a81eea94bead4881f"
	I1026 14:16:41.285249  854848 cri.go:89] found id: "0957c0a36894ac64f64707cab794cc2ea3ec3052b89e5973d410bc3d470f0ccc"
	I1026 14:16:41.285252  854848 cri.go:89] found id: "3552d128c67c5f8bc101f8fec4ea4a567c8e554450e010cea9fff33e2fb35c57"
	I1026 14:16:41.285256  854848 cri.go:89] found id: "e0688bdc55e0b1428d713099dfcdead41642afc46111de5efa3f9e8fc577a82f"
	I1026 14:16:41.285259  854848 cri.go:89] found id: "0f54646dd806e6f1d2d2a55010ade3d07b7c4c78f14093b5ea24c778c704d8d9"
	I1026 14:16:41.285261  854848 cri.go:89] found id: "83682e4a110f1836b76b9ab37ae5bdb5165df03ddd6d4aab400697fb4757a66a"
	I1026 14:16:41.285264  854848 cri.go:89] found id: "ea6861a45ac70f5a40063121e871650cf8d06fbf282521746f2f1cec0f96e741"
	I1026 14:16:41.285274  854848 cri.go:89] found id: "0314c0bc382ed36965ef868e31dc0f76b6d82e34f43bf5a49c4799ecd426990c"
	I1026 14:16:41.285277  854848 cri.go:89] found id: "12266be6b9ab3bae1170a4813366b003d8d74419265ae8317f745310842b0eb6"
	I1026 14:16:41.285279  854848 cri.go:89] found id: "7c8dc6d14b139c980202322abce8e8be08218ec570fe222c54763e5032be2feb"
	I1026 14:16:41.285281  854848 cri.go:89] found id: "e712266799f113c6e29070d3b446eb814ab3d82a01e5503cf6d420bc5d9dd807"
	I1026 14:16:41.285284  854848 cri.go:89] found id: "c3bf40d60ab5e31a883ca325e0e0ec980516a554582873a5c7653558a6a05c25"
	I1026 14:16:41.285286  854848 cri.go:89] found id: "db7c2a98e81dfa3a84fa710f2fe409325e697b34c28852544eccec3493ba6c36"
	I1026 14:16:41.285298  854848 cri.go:89] found id: "9bd2912e692dc7dc8832b9f484bdfcb583e9e399f257d572d4fddb38842ac29a"
	I1026 14:16:41.285305  854848 cri.go:89] found id: "ea11dd25ee99edc9b27421bacea724bf74b1fec81e1f33251d8241d538f0bd7b"
	I1026 14:16:41.285309  854848 cri.go:89] found id: "6ec65c531ce9b20e7dfdb9cdb1623754497a4088bbed9f545ad3b0f28e423539"
	I1026 14:16:41.285312  854848 cri.go:89] found id: "4f25f66b4cedfe4a67445f7535bebe5278f7e84ec91c43ad9eee37d250277e78"
	I1026 14:16:41.285314  854848 cri.go:89] found id: "a0eba15d448bec4198d79695967a6f8e6718f30814fcdde9252cc843d58f1702"
	I1026 14:16:41.285317  854848 cri.go:89] found id: "c2b16514601ac206983ecc827f418a7f7c9779b86a8ac77a095c139429ddb09c"
	I1026 14:16:41.285319  854848 cri.go:89] found id: "102e7dda912458a4fb7c5cf795d24e3f7f8111609a7f9f3d6aa2ac793be7d8ed"
	I1026 14:16:41.285322  854848 cri.go:89] found id: "4150a83c0db93bd824ae7492cd5bbd3cd5b925dc5e29702692a93bb4ebe91e4a"
	I1026 14:16:41.285324  854848 cri.go:89] found id: "7a9a679c5c891888d2fe6da11a5021a47a92d61386bbbc79c23ddd0de01e1321"
	I1026 14:16:41.285327  854848 cri.go:89] found id: ""
	I1026 14:16:41.285378  854848 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:16:41.300181  854848 out.go:203] 
	W1026 14:16:41.301462  854848 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:16:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:16:41.301498  854848 out.go:285] * 
	W1026 14:16:41.306551  854848 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:16:41.307649  854848 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-459729 --alsologtostderr -v=1": exit status 11
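The enable fails on the same paused check that broke the addon disables above, not on headlamp itself. A quick sanity check (a sketch against the inspect data captured below, which reports "Paused": false for the node container):

	# hypothetical sanity check: ask docker directly whether the node is paused
	docker inspect -f '{{.State.Paused}}' addons-459729
	# => false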
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-459729
helpers_test.go:243: (dbg) docker inspect addons-459729:

-- stdout --
	[
	    {
	        "Id": "fc6e75fab9c5724b831e93e0ad2a93d91d49dd1e164485d8b27b314fbc5e0b99",
	        "Created": "2025-10-26T14:14:40.52606534Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 847075,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T14:14:40.558709556Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/fc6e75fab9c5724b831e93e0ad2a93d91d49dd1e164485d8b27b314fbc5e0b99/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc6e75fab9c5724b831e93e0ad2a93d91d49dd1e164485d8b27b314fbc5e0b99/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc6e75fab9c5724b831e93e0ad2a93d91d49dd1e164485d8b27b314fbc5e0b99/hosts",
	        "LogPath": "/var/lib/docker/containers/fc6e75fab9c5724b831e93e0ad2a93d91d49dd1e164485d8b27b314fbc5e0b99/fc6e75fab9c5724b831e93e0ad2a93d91d49dd1e164485d8b27b314fbc5e0b99-json.log",
	        "Name": "/addons-459729",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-459729:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-459729",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc6e75fab9c5724b831e93e0ad2a93d91d49dd1e164485d8b27b314fbc5e0b99",
	                "LowerDir": "/var/lib/docker/overlay2/be283a9f8cd9ccd9baac09b427be1213a6b5c9cded6ad57cc7c2dd84f70df753-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/be283a9f8cd9ccd9baac09b427be1213a6b5c9cded6ad57cc7c2dd84f70df753/merged",
	                "UpperDir": "/var/lib/docker/overlay2/be283a9f8cd9ccd9baac09b427be1213a6b5c9cded6ad57cc7c2dd84f70df753/diff",
	                "WorkDir": "/var/lib/docker/overlay2/be283a9f8cd9ccd9baac09b427be1213a6b5c9cded6ad57cc7c2dd84f70df753/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-459729",
	                "Source": "/var/lib/docker/volumes/addons-459729/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-459729",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-459729",
	                "name.minikube.sigs.k8s.io": "addons-459729",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "27cac62847effb19906009c5979fe40bbf685a449ce5b4deb39ded6dddff8b6f",
	            "SandboxKey": "/var/run/docker/netns/27cac62847ef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33536"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33537"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33540"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33538"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33539"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-459729": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:b4:86:17:1e:a3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "35dc3def6cc813d1d5c906424df9f8355bd88f05b16bb1826e9958e3c782a1a4",
	                    "EndpointID": "3162d9d223ad2c1fef671da2ec9c0200d2ce47e2eeda4daaba75d1967d709ae6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-459729",
	                        "fc6e75fab9c5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
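For reference, the SSH endpoint the helpers dialed (127.0.0.1:33536 in the stderr logs above) comes straight from this inspect data; the Go template shown in the cli_runner log lines can be run standalone:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-459729
	# => 33536, matching the sshutil.go line in the stderr logs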
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-459729 -n addons-459729
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-459729 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-459729 logs -n 25: (1.199752222s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-313763 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-313763   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ delete  │ -p download-only-313763                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-313763   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ start   │ -o=json --download-only -p download-only-008452 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-008452   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ delete  │ -p download-only-008452                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-008452   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ delete  │ -p download-only-313763                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-313763   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ delete  │ -p download-only-008452                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-008452   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ start   │ --download-only -p download-docker-939440 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-939440 │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ delete  │ -p download-docker-939440                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-939440 │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ start   │ --download-only -p binary-mirror-114305 --alsologtostderr --binary-mirror http://127.0.0.1:44689 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-114305   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ delete  │ -p binary-mirror-114305                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-114305   │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ addons  │ enable dashboard -p addons-459729                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ addons  │ disable dashboard -p addons-459729                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ start   │ -p addons-459729 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:16 UTC │
	│ addons  │ addons-459729 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ addons  │ addons-459729 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	│ addons  │ enable headlamp -p addons-459729 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-459729          │ jenkins │ v1.37.0 │ 26 Oct 25 14:16 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 14:14:17
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
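	(worked example of that format: the next entry, "I1026 14:14:17.112515  846424 out.go:360]", decodes as severity I (info), date 10/26, time 14:14:17.112515, threadid 846424, and source location out.go:360, followed by the message)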
	I1026 14:14:17.112515  846424 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:14:17.112795  846424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:14:17.112803  846424 out.go:374] Setting ErrFile to fd 2...
	I1026 14:14:17.112807  846424 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:14:17.112990  846424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:14:17.113534  846424 out.go:368] Setting JSON to false
	I1026 14:14:17.114463  846424 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7005,"bootTime":1761481052,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 14:14:17.114570  846424 start.go:141] virtualization: kvm guest
	I1026 14:14:17.116382  846424 out.go:179] * [addons-459729] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 14:14:17.117587  846424 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 14:14:17.117592  846424 notify.go:220] Checking for updates...
	I1026 14:14:17.118732  846424 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:14:17.119875  846424 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 14:14:17.121054  846424 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 14:14:17.122198  846424 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 14:14:17.123215  846424 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 14:14:17.124682  846424 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:14:17.149310  846424 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 14:14:17.149487  846424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:14:17.207621  846424 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-26 14:14:17.197494844 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 14:14:17.207741  846424 docker.go:318] overlay module found
	I1026 14:14:17.209500  846424 out.go:179] * Using the docker driver based on user configuration
	I1026 14:14:17.210611  846424 start.go:305] selected driver: docker
	I1026 14:14:17.210627  846424 start.go:925] validating driver "docker" against <nil>
	I1026 14:14:17.210642  846424 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 14:14:17.211282  846424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:14:17.265537  846424 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-26 14:14:17.255623393 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 14:14:17.265767  846424 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 14:14:17.266017  846424 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 14:14:17.268242  846424 out.go:179] * Using Docker driver with root privileges
	I1026 14:14:17.269488  846424 cni.go:84] Creating CNI manager for ""
	I1026 14:14:17.269559  846424 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 14:14:17.269572  846424 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 14:14:17.269643  846424 start.go:349] cluster config:
	{Name:addons-459729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-459729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:14:17.270969  846424 out.go:179] * Starting "addons-459729" primary control-plane node in "addons-459729" cluster
	I1026 14:14:17.272134  846424 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 14:14:17.273402  846424 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 14:14:17.274551  846424 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 14:14:17.274581  846424 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 14:14:17.274602  846424 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 14:14:17.274611  846424 cache.go:58] Caching tarball of preloaded images
	I1026 14:14:17.274710  846424 preload.go:233] Found /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 14:14:17.274721  846424 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 14:14:17.275086  846424 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/config.json ...
	I1026 14:14:17.275112  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/config.json: {Name:mk9529b624fed8d03806b178f8e915dee8aa0e87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:17.292287  846424 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1026 14:14:17.292466  846424 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1026 14:14:17.292494  846424 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1026 14:14:17.292500  846424 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1026 14:14:17.292513  846424 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1026 14:14:17.292520  846424 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1026 14:14:29.432150  846424 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1026 14:14:29.432207  846424 cache.go:232] Successfully downloaded all kic artifacts
	I1026 14:14:29.432255  846424 start.go:360] acquireMachinesLock for addons-459729: {Name:mk6d98d5da8e9c6ee516b00ba1c75ff50ea84eb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 14:14:29.432358  846424 start.go:364] duration metric: took 82.777µs to acquireMachinesLock for "addons-459729"
	I1026 14:14:29.432384  846424 start.go:93] Provisioning new machine with config: &{Name:addons-459729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-459729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 14:14:29.432464  846424 start.go:125] createHost starting for "" (driver="docker")
	I1026 14:14:29.434070  846424 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1026 14:14:29.434326  846424 start.go:159] libmachine.API.Create for "addons-459729" (driver="docker")
	I1026 14:14:29.434382  846424 client.go:168] LocalClient.Create starting
	I1026 14:14:29.434474  846424 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem
	I1026 14:14:29.636359  846424 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem
	I1026 14:14:29.991463  846424 cli_runner.go:164] Run: docker network inspect addons-459729 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 14:14:30.008472  846424 cli_runner.go:211] docker network inspect addons-459729 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 14:14:30.008584  846424 network_create.go:284] running [docker network inspect addons-459729] to gather additional debugging logs...
	I1026 14:14:30.008611  846424 cli_runner.go:164] Run: docker network inspect addons-459729
	W1026 14:14:30.026519  846424 cli_runner.go:211] docker network inspect addons-459729 returned with exit code 1
	I1026 14:14:30.026548  846424 network_create.go:287] error running [docker network inspect addons-459729]: docker network inspect addons-459729: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-459729 not found
	I1026 14:14:30.026559  846424 network_create.go:289] output of [docker network inspect addons-459729]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-459729 not found
	
	** /stderr **
	I1026 14:14:30.026678  846424 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 14:14:30.043803  846424 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021485f0}
	I1026 14:14:30.043866  846424 network_create.go:124] attempt to create docker network addons-459729 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1026 14:14:30.043913  846424 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-459729 addons-459729
	I1026 14:14:30.100466  846424 network_create.go:108] docker network addons-459729 192.168.49.0/24 created
	I1026 14:14:30.100509  846424 kic.go:121] calculated static IP "192.168.49.2" for the "addons-459729" container
	I1026 14:14:30.100583  846424 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 14:14:30.116904  846424 cli_runner.go:164] Run: docker volume create addons-459729 --label name.minikube.sigs.k8s.io=addons-459729 --label created_by.minikube.sigs.k8s.io=true
	I1026 14:14:30.135222  846424 oci.go:103] Successfully created a docker volume addons-459729
	I1026 14:14:30.135299  846424 cli_runner.go:164] Run: docker run --rm --name addons-459729-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-459729 --entrypoint /usr/bin/test -v addons-459729:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 14:14:36.146492  846424 cli_runner.go:217] Completed: docker run --rm --name addons-459729-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-459729 --entrypoint /usr/bin/test -v addons-459729:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (6.011135666s)
	I1026 14:14:36.146530  846424 oci.go:107] Successfully prepared a docker volume addons-459729
	I1026 14:14:36.146583  846424 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 14:14:36.146616  846424 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 14:14:36.146686  846424 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-459729:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 14:14:40.450984  846424 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-459729:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.304224683s)
	I1026 14:14:40.451018  846424 kic.go:203] duration metric: took 4.304399454s to extract preloaded images to volume ...
	W1026 14:14:40.451121  846424 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 14:14:40.451155  846424 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 14:14:40.451213  846424 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 14:14:40.510278  846424 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-459729 --name addons-459729 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-459729 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-459729 --network addons-459729 --ip 192.168.49.2 --volume addons-459729:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 14:14:40.765991  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Running}}
	I1026 14:14:40.784464  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:14:40.802012  846424 cli_runner.go:164] Run: docker exec addons-459729 stat /var/lib/dpkg/alternatives/iptables
	I1026 14:14:40.851940  846424 oci.go:144] the created container "addons-459729" has a running status.
	I1026 14:14:40.851973  846424 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa...
	I1026 14:14:40.949694  846424 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 14:14:40.978174  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:14:41.000243  846424 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 14:14:41.000276  846424 kic_runner.go:114] Args: [docker exec --privileged addons-459729 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 14:14:41.043571  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:14:41.069582  846424 machine.go:93] provisionDockerMachine start ...
	I1026 14:14:41.069796  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:41.093554  846424 main.go:141] libmachine: Using SSH client type: native
	I1026 14:14:41.093778  846424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33536 <nil> <nil>}
	I1026 14:14:41.093791  846424 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 14:14:41.243331  846424 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-459729
	
	I1026 14:14:41.243363  846424 ubuntu.go:182] provisioning hostname "addons-459729"
	I1026 14:14:41.243419  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:41.261776  846424 main.go:141] libmachine: Using SSH client type: native
	I1026 14:14:41.262051  846424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33536 <nil> <nil>}
	I1026 14:14:41.262072  846424 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-459729 && echo "addons-459729" | sudo tee /etc/hostname
	I1026 14:14:41.414391  846424 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-459729
	
	I1026 14:14:41.414497  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:41.433449  846424 main.go:141] libmachine: Using SSH client type: native
	I1026 14:14:41.433812  846424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33536 <nil> <nil>}
	I1026 14:14:41.433851  846424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-459729' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-459729/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-459729' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 14:14:41.575368  846424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 14:14:41.575416  846424 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-841519/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-841519/.minikube}
	I1026 14:14:41.575444  846424 ubuntu.go:190] setting up certificates
	I1026 14:14:41.575464  846424 provision.go:84] configureAuth start
	I1026 14:14:41.575530  846424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-459729
	I1026 14:14:41.593069  846424 provision.go:143] copyHostCerts
	I1026 14:14:41.593211  846424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem (1123 bytes)
	I1026 14:14:41.593370  846424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem (1675 bytes)
	I1026 14:14:41.593473  846424 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem (1082 bytes)
	I1026 14:14:41.593572  846424 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem org=jenkins.addons-459729 san=[127.0.0.1 192.168.49.2 addons-459729 localhost minikube]
	I1026 14:14:41.952749  846424 provision.go:177] copyRemoteCerts
	I1026 14:14:41.952809  846424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 14:14:41.952864  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:41.971059  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:14:42.071814  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 14:14:42.091550  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 14:14:42.109573  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 14:14:42.127661  846424 provision.go:87] duration metric: took 552.178827ms to configureAuth
	I1026 14:14:42.127694  846424 ubuntu.go:206] setting minikube options for container-runtime
	I1026 14:14:42.127910  846424 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:14:42.128035  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:42.145755  846424 main.go:141] libmachine: Using SSH client type: native
	I1026 14:14:42.145991  846424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33536 <nil> <nil>}
	I1026 14:14:42.146015  846424 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 14:14:42.398484  846424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 14:14:42.398510  846424 machine.go:96] duration metric: took 1.328895029s to provisionDockerMachine
	I1026 14:14:42.398521  846424 client.go:171] duration metric: took 12.964130689s to LocalClient.Create
	I1026 14:14:42.398541  846424 start.go:167] duration metric: took 12.964216103s to libmachine.API.Create "addons-459729"
	I1026 14:14:42.398551  846424 start.go:293] postStartSetup for "addons-459729" (driver="docker")
	I1026 14:14:42.398565  846424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 14:14:42.398618  846424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 14:14:42.398665  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:42.416371  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:14:42.518463  846424 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 14:14:42.521931  846424 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 14:14:42.521963  846424 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 14:14:42.521977  846424 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/addons for local assets ...
	I1026 14:14:42.522046  846424 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/files for local assets ...
	I1026 14:14:42.522073  846424 start.go:296] duration metric: took 123.514687ms for postStartSetup
	I1026 14:14:42.522380  846424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-459729
	I1026 14:14:42.540283  846424 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/config.json ...
	I1026 14:14:42.540575  846424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 14:14:42.540629  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:42.558249  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:14:42.655957  846424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 14:14:42.660462  846424 start.go:128] duration metric: took 13.22797972s to createHost
	I1026 14:14:42.660486  846424 start.go:83] releasing machines lock for "addons-459729", held for 13.228116528s
	I1026 14:14:42.660551  846424 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-459729
	I1026 14:14:42.677972  846424 ssh_runner.go:195] Run: cat /version.json
	I1026 14:14:42.678042  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:42.678103  846424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 14:14:42.678186  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:14:42.696981  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:14:42.697266  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:14:42.856351  846424 ssh_runner.go:195] Run: systemctl --version
	I1026 14:14:42.863288  846424 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 14:14:42.900301  846424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 14:14:42.905120  846424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 14:14:42.905196  846424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 14:14:42.932600  846424 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 14:14:42.932623  846424 start.go:495] detecting cgroup driver to use...
	I1026 14:14:42.932656  846424 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 14:14:42.932705  846424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 14:14:42.948987  846424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 14:14:42.961218  846424 docker.go:218] disabling cri-docker service (if available) ...
	I1026 14:14:42.961271  846424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 14:14:42.977976  846424 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 14:14:42.995853  846424 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 14:14:43.078675  846424 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 14:14:43.167078  846424 docker.go:234] disabling docker service ...
	I1026 14:14:43.167150  846424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 14:14:43.186433  846424 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 14:14:43.199219  846424 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 14:14:43.281310  846424 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 14:14:43.363611  846424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 14:14:43.376627  846424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 14:14:43.391082  846424 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 14:14:43.391147  846424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:14:43.401654  846424 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 14:14:43.401722  846424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:14:43.411314  846424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:14:43.420752  846424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:14:43.430053  846424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 14:14:43.438422  846424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:14:43.447584  846424 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:14:43.462065  846424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:14:43.471427  846424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 14:14:43.478920  846424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 14:14:43.486416  846424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 14:14:43.566863  846424 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 14:14:43.671842  846424 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 14:14:43.671918  846424 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 14:14:43.675998  846424 start.go:563] Will wait 60s for crictl version
	I1026 14:14:43.676061  846424 ssh_runner.go:195] Run: which crictl
	I1026 14:14:43.679709  846424 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 14:14:43.706317  846424 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 14:14:43.706420  846424 ssh_runner.go:195] Run: crio --version
	I1026 14:14:43.734316  846424 ssh_runner.go:195] Run: crio --version
	I1026 14:14:43.764384  846424 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 14:14:43.765785  846424 cli_runner.go:164] Run: docker network inspect addons-459729 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 14:14:43.783001  846424 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 14:14:43.787207  846424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 14:14:43.797548  846424 kubeadm.go:883] updating cluster {Name:addons-459729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-459729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 14:14:43.797721  846424 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 14:14:43.797793  846424 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 14:14:43.832123  846424 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 14:14:43.832145  846424 crio.go:433] Images already preloaded, skipping extraction
	I1026 14:14:43.832214  846424 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 14:14:43.858842  846424 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 14:14:43.858871  846424 cache_images.go:85] Images are preloaded, skipping loading
	I1026 14:14:43.858883  846424 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1026 14:14:43.859030  846424 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-459729 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-459729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 14:14:43.859110  846424 ssh_runner.go:195] Run: crio config
	I1026 14:14:43.904710  846424 cni.go:84] Creating CNI manager for ""
	I1026 14:14:43.904736  846424 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 14:14:43.904762  846424 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 14:14:43.904789  846424 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-459729 NodeName:addons-459729 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 14:14:43.904928  846424 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-459729"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 14:14:43.904991  846424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 14:14:43.913572  846424 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 14:14:43.913638  846424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 14:14:43.921876  846424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1026 14:14:43.934931  846424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 14:14:43.950730  846424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1026 14:14:43.963901  846424 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1026 14:14:43.967671  846424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 14:14:43.977851  846424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 14:14:44.058772  846424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 14:14:44.083941  846424 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729 for IP: 192.168.49.2
	I1026 14:14:44.083989  846424 certs.go:195] generating shared ca certs ...
	I1026 14:14:44.084018  846424 certs.go:227] acquiring lock for ca certs: {Name:mkc310765b5f037cf348f6c57ba521193a825757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:44.084226  846424 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key
	I1026 14:14:44.387912  846424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt ...
	I1026 14:14:44.387946  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt: {Name:mk8933e3107ac3223c09abfcc2b23b2a267f80dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:44.388133  846424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key ...
	I1026 14:14:44.388149  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key: {Name:mk6b1973d9c275e0f32b5e6221cf09f2bcd1d12d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:44.388250  846424 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key
	I1026 14:14:45.246605  846424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt ...
	I1026 14:14:45.246640  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt: {Name:mkdb300b113fc66de4a4109eb2097856fa215e63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.246821  846424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key ...
	I1026 14:14:45.246832  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key: {Name:mkaba3ad2bc7a1a50d30bd9bfd3aea7c19e5fda9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.246922  846424 certs.go:257] generating profile certs ...
	I1026 14:14:45.247013  846424 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.key
	I1026 14:14:45.247033  846424 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt with IP's: []
	I1026 14:14:45.334595  846424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt ...
	I1026 14:14:45.334626  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: {Name:mkafadf8981207eceb9ebbe4962ff018f519fecb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.334804  846424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.key ...
	I1026 14:14:45.334815  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.key: {Name:mka2fbae2418418d747b82adac0fb2b7f375ffa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.334888  846424 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.key.e8921df1
	I1026 14:14:45.334908  846424 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.crt.e8921df1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1026 14:14:45.666093  846424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.crt.e8921df1 ...
	I1026 14:14:45.666125  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.crt.e8921df1: {Name:mkb948c94234f3b4bc97a7b01df3ae78190037f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.666319  846424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.key.e8921df1 ...
	I1026 14:14:45.666337  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.key.e8921df1: {Name:mk3bf95757956aa10cef36d1b4e59b884575ea91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.666413  846424 certs.go:382] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.crt.e8921df1 -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.crt
	I1026 14:14:45.666512  846424 certs.go:386] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.key.e8921df1 -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.key
	I1026 14:14:45.666569  846424 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.key
	I1026 14:14:45.666596  846424 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.crt with IP's: []
	I1026 14:14:45.921156  846424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.crt ...
	I1026 14:14:45.921205  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.crt: {Name:mkbc119a7d5f48960c3f21d5f4d887a967005987 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.921387  846424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.key ...
	I1026 14:14:45.921401  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.key: {Name:mk005d3953795c30c971b42e066689f23e94bbc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:14:45.921650  846424 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 14:14:45.921691  846424 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem (1082 bytes)
	I1026 14:14:45.921717  846424 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem (1123 bytes)
	I1026 14:14:45.921738  846424 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem (1675 bytes)
	I1026 14:14:45.922419  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 14:14:45.941068  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 14:14:45.958551  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 14:14:45.976346  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 14:14:45.994052  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 14:14:46.011477  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 14:14:46.028955  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 14:14:46.046187  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 14:14:46.063408  846424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 14:14:46.082572  846424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 14:14:46.095043  846424 ssh_runner.go:195] Run: openssl version
	I1026 14:14:46.101206  846424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 14:14:46.112299  846424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 14:14:46.116268  846424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:14 /usr/share/ca-certificates/minikubeCA.pem
	I1026 14:14:46.116319  846424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 14:14:46.152435  846424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
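
	The two commands above implement OpenSSL's hashed-directory convention for CA lookup; a minimal sketch of the same two steps, assuming only that openssl is on PATH (the b5213941 hash printed by the first command is what names the symlink in the second):

	# print the subject-name hash OpenSSL uses to find a CA under /etc/ssl/certs
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
	# link the cert as <hash>.0 so TLS clients on the node trust minikubeCA
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
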
	I1026 14:14:46.161706  846424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 14:14:46.165576  846424 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 14:14:46.165626  846424 kubeadm.go:400] StartCluster: {Name:addons-459729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-459729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:14:46.165713  846424 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:14:46.165765  846424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:14:46.194501  846424 cri.go:89] found id: ""
	I1026 14:14:46.194576  846424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 14:14:46.202715  846424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 14:14:46.211023  846424 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 14:14:46.211084  846424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 14:14:46.219223  846424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 14:14:46.219242  846424 kubeadm.go:157] found existing configuration files:
	
	I1026 14:14:46.219304  846424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 14:14:46.227401  846424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 14:14:46.227464  846424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 14:14:46.234983  846424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 14:14:46.242551  846424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 14:14:46.242605  846424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 14:14:46.249969  846424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 14:14:46.257567  846424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 14:14:46.257615  846424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 14:14:46.265426  846424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 14:14:46.273171  846424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 14:14:46.273236  846424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 14:14:46.280562  846424 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 14:14:46.343303  846424 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1026 14:14:46.403244  846424 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 14:14:56.860323  846424 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 14:14:56.860407  846424 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 14:14:56.860530  846424 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 14:14:56.860618  846424 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 14:14:56.860662  846424 kubeadm.go:318] OS: Linux
	I1026 14:14:56.860706  846424 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 14:14:56.860748  846424 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 14:14:56.860797  846424 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 14:14:56.860866  846424 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 14:14:56.860933  846424 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 14:14:56.861010  846424 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 14:14:56.861057  846424 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 14:14:56.861095  846424 kubeadm.go:318] CGROUPS_IO: enabled
	I1026 14:14:56.861201  846424 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 14:14:56.861325  846424 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 14:14:56.861408  846424 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 14:14:56.861499  846424 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 14:14:56.863767  846424 out.go:252]   - Generating certificates and keys ...
	I1026 14:14:56.863843  846424 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 14:14:56.863905  846424 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 14:14:56.863967  846424 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 14:14:56.864073  846424 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 14:14:56.864145  846424 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 14:14:56.864216  846424 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 14:14:56.864284  846424 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 14:14:56.864408  846424 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-459729 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 14:14:56.864455  846424 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 14:14:56.864552  846424 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-459729 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 14:14:56.864612  846424 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 14:14:56.864666  846424 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 14:14:56.864721  846424 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 14:14:56.864809  846424 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 14:14:56.864880  846424 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 14:14:56.864955  846424 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 14:14:56.865011  846424 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 14:14:56.865071  846424 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 14:14:56.865154  846424 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 14:14:56.865256  846424 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 14:14:56.865342  846424 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 14:14:56.866657  846424 out.go:252]   - Booting up control plane ...
	I1026 14:14:56.866747  846424 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 14:14:56.866847  846424 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 14:14:56.866934  846424 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 14:14:56.867095  846424 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 14:14:56.867202  846424 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 14:14:56.867333  846424 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 14:14:56.867446  846424 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 14:14:56.867518  846424 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 14:14:56.867705  846424 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 14:14:56.867847  846424 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 14:14:56.867935  846424 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001058368s
	I1026 14:14:56.868063  846424 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 14:14:56.868199  846424 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1026 14:14:56.868310  846424 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 14:14:56.868408  846424 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 14:14:56.868533  846424 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.557698122s
	I1026 14:14:56.868636  846424 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.268793474s
	I1026 14:14:56.868740  846424 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001941586s
	I1026 14:14:56.868848  846424 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 14:14:56.868985  846424 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 14:14:56.869074  846424 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 14:14:56.869319  846424 kubeadm.go:318] [mark-control-plane] Marking the node addons-459729 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 14:14:56.869423  846424 kubeadm.go:318] [bootstrap-token] Using token: f6fn21.ali5nckn8rkh7x29
	I1026 14:14:56.871880  846424 out.go:252]   - Configuring RBAC rules ...
	I1026 14:14:56.871970  846424 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 14:14:56.872081  846424 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 14:14:56.872291  846424 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 14:14:56.872503  846424 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 14:14:56.872682  846424 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 14:14:56.872826  846424 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 14:14:56.872987  846424 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 14:14:56.873058  846424 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 14:14:56.873120  846424 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 14:14:56.873133  846424 kubeadm.go:318] 
	I1026 14:14:56.873228  846424 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 14:14:56.873240  846424 kubeadm.go:318] 
	I1026 14:14:56.873354  846424 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 14:14:56.873363  846424 kubeadm.go:318] 
	I1026 14:14:56.873405  846424 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 14:14:56.873458  846424 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 14:14:56.873503  846424 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 14:14:56.873509  846424 kubeadm.go:318] 
	I1026 14:14:56.873555  846424 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 14:14:56.873560  846424 kubeadm.go:318] 
	I1026 14:14:56.873597  846424 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 14:14:56.873603  846424 kubeadm.go:318] 
	I1026 14:14:56.873643  846424 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 14:14:56.873707  846424 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 14:14:56.873765  846424 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 14:14:56.873770  846424 kubeadm.go:318] 
	I1026 14:14:56.873885  846424 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 14:14:56.873950  846424 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 14:14:56.873955  846424 kubeadm.go:318] 
	I1026 14:14:56.874020  846424 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token f6fn21.ali5nckn8rkh7x29 \
	I1026 14:14:56.874104  846424 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:17405a11f9ced5253329d88582717a258ab19676719f7fb1d52a2fb8fc3ffa0b \
	I1026 14:14:56.874125  846424 kubeadm.go:318] 	--control-plane 
	I1026 14:14:56.874131  846424 kubeadm.go:318] 
	I1026 14:14:56.874231  846424 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 14:14:56.874245  846424 kubeadm.go:318] 
	I1026 14:14:56.874359  846424 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token f6fn21.ali5nckn8rkh7x29 \
	I1026 14:14:56.874513  846424 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:17405a11f9ced5253329d88582717a258ab19676719f7fb1d52a2fb8fc3ffa0b 
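
	The sha256 value kubeadm prints in the join commands above is a hash of the cluster CA's public key; a sketch of the standard kubeadm recipe for recomputing it, pointed at the CA path this run writes its certs to (/var/lib/minikube/certs):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
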
	I1026 14:14:56.874526  846424 cni.go:84] Creating CNI manager for ""
	I1026 14:14:56.874533  846424 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 14:14:56.876103  846424 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 14:14:56.877647  846424 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 14:14:56.882227  846424 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 14:14:56.882247  846424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 14:14:56.895793  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 14:14:57.106713  846424 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 14:14:57.106824  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:14:57.106854  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-459729 minikube.k8s.io/updated_at=2025_10_26T14_14_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=addons-459729 minikube.k8s.io/primary=true
	I1026 14:14:57.117887  846424 ops.go:34] apiserver oom_adj: -16
	I1026 14:14:57.187931  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:14:57.688917  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:14:58.188959  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:14:58.688895  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:14:59.188658  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:14:59.688052  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:00.188849  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:00.687985  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:01.188637  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:01.688698  846424 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:15:01.754745  846424 kubeadm.go:1113] duration metric: took 4.647991318s to wait for elevateKubeSystemPrivileges
	I1026 14:15:01.754787  846424 kubeadm.go:402] duration metric: took 15.58916607s to StartCluster
	I1026 14:15:01.754836  846424 settings.go:142] acquiring lock: {Name:mkab79daecf1fab35293493e1e2484069a81f3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:01.754978  846424 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 14:15:01.755482  846424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:01.755722  846424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 14:15:01.755738  846424 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 14:15:01.755806  846424 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1026 14:15:01.755939  846424 addons.go:69] Setting yakd=true in profile "addons-459729"
	I1026 14:15:01.755964  846424 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-459729"
	I1026 14:15:01.755989  846424 addons.go:69] Setting registry=true in profile "addons-459729"
	I1026 14:15:01.756000  846424 addons.go:69] Setting inspektor-gadget=true in profile "addons-459729"
	I1026 14:15:01.756006  846424 addons.go:238] Setting addon registry=true in "addons-459729"
	I1026 14:15:01.756016  846424 addons.go:238] Setting addon inspektor-gadget=true in "addons-459729"
	I1026 14:15:01.756040  846424 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:15:01.756049  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.756052  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.756034  846424 addons.go:69] Setting ingress=true in profile "addons-459729"
	I1026 14:15:01.756078  846424 addons.go:238] Setting addon ingress=true in "addons-459729"
	I1026 14:15:01.756055  846424 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-459729"
	I1026 14:15:01.756096  846424 addons.go:69] Setting registry-creds=true in profile "addons-459729"
	I1026 14:15:01.756104  846424 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-459729"
	I1026 14:15:01.756113  846424 addons.go:69] Setting default-storageclass=true in profile "addons-459729"
	I1026 14:15:01.756115  846424 addons.go:69] Setting storage-provisioner=true in profile "addons-459729"
	I1026 14:15:01.756130  846424 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-459729"
	I1026 14:15:01.756140  846424 addons.go:238] Setting addon storage-provisioner=true in "addons-459729"
	I1026 14:15:01.756147  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.756152  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.755989  846424 addons.go:69] Setting ingress-dns=true in profile "addons-459729"
	I1026 14:15:01.756657  846424 addons.go:238] Setting addon ingress-dns=true in "addons-459729"
	I1026 14:15:01.756719  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.756106  846424 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-459729"
	I1026 14:15:01.756889  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.756910  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.757017  846424 addons.go:69] Setting metrics-server=true in profile "addons-459729"
	I1026 14:15:01.757045  846424 addons.go:238] Setting addon metrics-server=true in "addons-459729"
	I1026 14:15:01.757082  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.757139  846424 addons.go:69] Setting volcano=true in profile "addons-459729"
	I1026 14:15:01.757156  846424 addons.go:238] Setting addon volcano=true in "addons-459729"
	I1026 14:15:01.757203  846424 addons.go:69] Setting gcp-auth=true in profile "addons-459729"
	I1026 14:15:01.757206  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.757222  846424 mustload.go:65] Loading cluster: addons-459729
	I1026 14:15:01.757433  846424 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:15:01.757547  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.757549  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.757698  846424 addons.go:69] Setting volumesnapshots=true in profile "addons-459729"
	I1026 14:15:01.757716  846424 addons.go:238] Setting addon volumesnapshots=true in "addons-459729"
	I1026 14:15:01.757734  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.757739  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.757840  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.758388  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.759608  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.755980  846424 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-459729"
	I1026 14:15:01.760402  846424 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-459729"
	I1026 14:15:01.760438  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.756108  846424 addons.go:238] Setting addon registry-creds=true in "addons-459729"
	I1026 14:15:01.760907  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.761372  846424 out.go:179] * Verifying Kubernetes components...
	I1026 14:15:01.761867  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.761926  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.762184  846424 addons.go:69] Setting cloud-spanner=true in profile "addons-459729"
	I1026 14:15:01.762211  846424 addons.go:238] Setting addon cloud-spanner=true in "addons-459729"
	I1026 14:15:01.762241  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.755981  846424 addons.go:238] Setting addon yakd=true in "addons-459729"
	I1026 14:15:01.762447  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.756085  846424 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-459729"
	I1026 14:15:01.762581  846424 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-459729"
	I1026 14:15:01.763509  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.763699  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.763743  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.763750  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.763779  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.764110  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.764900  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.765241  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.765248  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.768115  846424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 14:15:01.824394  846424 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1026 14:15:01.826325  846424 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 14:15:01.826360  846424 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 14:15:01.826434  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.827641  846424 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-459729"
	I1026 14:15:01.827777  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.828346  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	W1026 14:15:01.835680  846424 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1026 14:15:01.838243  846424 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 14:15:01.838670  846424 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1026 14:15:01.838918  846424 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1026 14:15:01.839837  846424 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1026 14:15:01.840040  846424 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1026 14:15:01.840135  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.840788  846424 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 14:15:01.840810  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 14:15:01.840875  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.841970  846424 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 14:15:01.843548  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1026 14:15:01.842268  846424 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1026 14:15:01.843080  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1026 14:15:01.843369  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.845123  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.846927  846424 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 14:15:01.846947  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1026 14:15:01.847004  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.856949  846424 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1026 14:15:01.856978  846424 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1026 14:15:01.857056  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.860776  846424 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1026 14:15:01.867308  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1026 14:15:01.867357  846424 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1026 14:15:01.868311  846424 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 14:15:01.868329  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1026 14:15:01.868399  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.868677  846424 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1026 14:15:01.871040  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1026 14:15:01.871111  846424 out.go:179]   - Using image docker.io/registry:3.0.0
	I1026 14:15:01.872855  846424 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1026 14:15:01.872878  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1026 14:15:01.872949  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.873125  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1026 14:15:01.873516  846424 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1026 14:15:01.873535  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1026 14:15:01.873835  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.877387  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1026 14:15:01.879560  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1026 14:15:01.882349  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1026 14:15:01.883556  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1026 14:15:01.892467  846424 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1026 14:15:01.893569  846424 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1026 14:15:01.893595  846424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1026 14:15:01.893667  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.905909  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.906029  846424 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1026 14:15:01.908203  846424 out.go:179]   - Using image docker.io/busybox:stable
	I1026 14:15:01.912234  846424 addons.go:238] Setting addon default-storageclass=true in "addons-459729"
	I1026 14:15:01.913190  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:01.913688  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:01.914316  846424 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 14:15:01.914397  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1026 14:15:01.914467  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.925284  846424 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1026 14:15:01.929356  846424 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
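
	The sed pipeline above rewrites the coredns ConfigMap in flight; a sketch of the Corefile fragment it produces (the address and hostname come from the command itself, the surrounding plugin order is the stock CoreDNS default and is an assumption here):

	.:53 {
	    log
	    errors
	    # ...default plugins unchanged...
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    # ...default plugins unchanged...
	}
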
	I1026 14:15:01.930036  846424 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1026 14:15:01.930058  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1026 14:15:01.930129  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.936261  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.937883  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.938657  846424 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 14:15:01.940626  846424 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1026 14:15:01.941914  846424 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1026 14:15:01.942000  846424 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1026 14:15:01.942101  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.941961  846424 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1026 14:15:01.945791  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.945864  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.948625  846424 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 14:15:01.949928  846424 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 14:15:01.949982  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1026 14:15:01.950059  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.955204  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.970925  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.975351  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.976053  846424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 14:15:01.978630  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:01.991435  846424 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 14:15:01.991462  846424 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 14:15:01.991528  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:01.991780  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:02.016851  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	W1026 14:15:02.019263  846424 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1026 14:15:02.019305  846424 retry.go:31] will retry after 217.923962ms: ssh: handshake failed: EOF
	I1026 14:15:02.023195  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:02.032276  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:02.035819  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:02.039781  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:02.125207  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 14:15:02.136547  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 14:15:02.141012  846424 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 14:15:02.141040  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1026 14:15:02.149864  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 14:15:02.150107  846424 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1026 14:15:02.150133  846424 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1026 14:15:02.153611  846424 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1026 14:15:02.153638  846424 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1026 14:15:02.155650  846424 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1026 14:15:02.155673  846424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1026 14:15:02.157330  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1026 14:15:02.160138  846424 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:02.160154  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1026 14:15:02.168525  846424 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 14:15:02.168554  846424 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 14:15:02.188885  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 14:15:02.190931  846424 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1026 14:15:02.190953  846424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1026 14:15:02.191058  846424 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1026 14:15:02.191119  846424 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1026 14:15:02.195824  846424 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1026 14:15:02.195847  846424 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1026 14:15:02.196657  846424 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1026 14:15:02.196677  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1026 14:15:02.197637  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1026 14:15:02.200528  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 14:15:02.207552  846424 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 14:15:02.207579  846424 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 14:15:02.232493  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 14:15:02.235058  846424 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1026 14:15:02.235104  846424 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1026 14:15:02.247417  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 14:15:02.247703  846424 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1026 14:15:02.247731  846424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1026 14:15:02.254701  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:02.261459  846424 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1026 14:15:02.261489  846424 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1026 14:15:02.297299  846424 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1026 14:15:02.297343  846424 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1026 14:15:02.298507  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1026 14:15:02.314881  846424 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1026 14:15:02.314916  846424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1026 14:15:02.328700  846424 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1026 14:15:02.328736  846424 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1026 14:15:02.358580  846424 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1026 14:15:02.359543  846424 node_ready.go:35] waiting up to 6m0s for node "addons-459729" to be "Ready" ...
	I1026 14:15:02.371344  846424 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1026 14:15:02.371372  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1026 14:15:02.404369  846424 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1026 14:15:02.404399  846424 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1026 14:15:02.424439  846424 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 14:15:02.424528  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1026 14:15:02.442236  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1026 14:15:02.460571  846424 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1026 14:15:02.460657  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1026 14:15:02.502911  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 14:15:02.534388  846424 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1026 14:15:02.534419  846424 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1026 14:15:02.545901  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 14:15:02.614728  846424 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1026 14:15:02.614838  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1026 14:15:02.667295  846424 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1026 14:15:02.667523  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1026 14:15:02.707698  846424 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 14:15:02.707786  846424 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1026 14:15:02.747588  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 14:15:02.873331  846424 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-459729" context rescaled to 1 replicas
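	Note: the kapi.go:214 line above is minikube trimming the stock two-replica coredns Deployment down to one for this single-node cluster. A minimal client-go sketch of the same adjustment via the scale subresource follows; the function name and the assumption of an already-built clientset cs are mine, not minikube's code:

		// Sketch: shrink the coredns Deployment in kube-system to 1 replica,
		// the operation the "rescaled to 1 replicas" log line reports.
		package sketch

		import (
			"context"

			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
		)

		func scaleCoreDNSToOne(ctx context.Context, cs kubernetes.Interface) error {
			// Read the current scale, then write it back with Replicas = 1.
			scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
			if err != nil {
				return err
			}
			scale.Spec.Replicas = 1
			_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
			return err
		}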
	I1026 14:15:03.502753  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.302179853s)
	I1026 14:15:03.502793  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.270259468s)
	I1026 14:15:03.502801  846424 addons.go:479] Verifying addon ingress=true in "addons-459729"
	I1026 14:15:03.503063  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.255611006s)
	I1026 14:15:03.503098  846424 addons.go:479] Verifying addon metrics-server=true in "addons-459729"
	I1026 14:15:03.503181  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.248430064s)
	W1026 14:15:03.503268  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:03.503289  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.204745558s)
	I1026 14:15:03.503322  846424 addons.go:479] Verifying addon registry=true in "addons-459729"
	I1026 14:15:03.503380  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.061046188s)
	I1026 14:15:03.503295  846424 retry.go:31] will retry after 148.010934ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
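	Note: this is the root failure behind the InspektorGadget entries in the summary table. kubectl rejects /etc/kubernetes/addons/ig-crd.yaml because the content it read carries no apiVersion or kind at all (consistent with an empty or truncated manifest), and the identical error recurs on every reapply below; only the surrounding namespace/daemonset objects keep applying cleanly. The retry.go:31 lines show minikube's response: reapply under a growing, jittered delay (148ms, 202ms, 256ms, ... up to 8.91s). A hedged sketch of that shape, where apply(), the base delay, and the doubling are my assumptions rather than minikube's actual retry.go:

		// Sketch of a retry loop with jittered, roughly doubling backoff,
		// the pattern the "will retry after ..." lines suggest.
		package sketch

		import (
			"fmt"
			"math/rand"
			"time"
		)

		func retryApply(apply func() error, attempts int) error {
			delay := 150 * time.Millisecond
			var err error
			for i := 0; i < attempts; i++ {
				if err = apply(); err == nil {
					return nil
				}
				wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
				fmt.Printf("apply failed, will retry after %v: %v\n", wait, err)
				time.Sleep(wait)
				delay *= 2 // grow the base delay each attempt
			}
			return err
		}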
	I1026 14:15:03.504599  846424 out.go:179] * Verifying registry addon...
	I1026 14:15:03.504631  846424 out.go:179] * Verifying ingress addon...
	I1026 14:15:03.507305  846424 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-459729 service yakd-dashboard -n yakd-dashboard
	
	I1026 14:15:03.508086  846424 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1026 14:15:03.508142  846424 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1026 14:15:03.511447  846424 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 14:15:03.511469  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:03.511568  846424 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1026 14:15:03.511589  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
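	Note: the kapi.go:75/86/96 lines that dominate the rest of this log are a simple poll: list the pods matching a label selector and report their phase until every one is Running (here they stay Pending because the node itself is still NotReady, per the node_ready.go warnings). A rough client-go sketch of that loop, with an assumed interval and timeout rather than minikube's:

		// Sketch: poll pods by label selector until all report Running.
		package sketch

		import (
			"context"
			"fmt"
			"time"

			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
		)

		func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
			deadline := time.Now().Add(timeout)
			for time.Now().Before(deadline) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err == nil && len(pods.Items) > 0 {
					running := true
					for _, p := range pods.Items {
						if p.Status.Phase != corev1.PodRunning {
							running = false
							break
						}
					}
					if running {
						return nil
					}
				}
				time.Sleep(500 * time.Millisecond)
			}
			return fmt.Errorf("pods %q in ns %q not Running after %v", selector, ns, timeout)
		}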
	I1026 14:15:03.651987  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:03.931773  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.428808877s)
	W1026 14:15:03.931834  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1026 14:15:03.931861  846424 retry.go:31] will retry after 202.223495ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
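	Note: unlike the ig-crd case, this failure is an ordering race rather than a bad manifest. The VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CRDs, and the API server has not finished registering the new kind, hence "ensure CRDs are installed first"; the forced reapply completes cleanly about two seconds later (14:15:06 below). One conventional way to remove the race is to split the apply and wait for the CRD to become Established in between. A sketch shelling out to kubectl; the CRD and file names come from the log, but the structure is my assumption, not minikube's fix:

		// Sketch: apply CRDs first, block until Established, then apply the
		// custom resources that depend on them.
		package sketch

		import (
			"fmt"
			"os/exec"
		)

		func applySnapshotClass() error {
			steps := [][]string{
				// 1. The CRD on its own.
				{"apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"},
				// 2. Wait for the API server to accept the new kind.
				{"wait", "--for=condition=established", "--timeout=60s",
					"crd/volumesnapshotclasses.snapshot.storage.k8s.io"},
				// 3. Only now apply the VolumeSnapshotClass object.
				{"apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
			}
			for _, args := range steps {
				if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
					return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
				}
			}
			return nil
		}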
	I1026 14:15:03.931929  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.386004332s)
	I1026 14:15:03.932280  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.184640366s)
	I1026 14:15:03.932321  846424 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-459729"
	I1026 14:15:03.934515  846424 out.go:179] * Verifying csi-hostpath-driver addon...
	I1026 14:15:03.936685  846424 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1026 14:15:03.939543  846424 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 14:15:03.939568  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:04.011803  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:04.012023  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:04.135249  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1026 14:15:04.302639  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:04.302684  846424 retry.go:31] will retry after 256.294826ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 14:15:04.362538  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:04.440917  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:04.541665  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:04.541710  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:04.559817  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:04.939696  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:05.011299  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:05.011458  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:05.440447  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:05.541188  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:05.541273  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:05.940840  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:06.011969  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:06.012243  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 14:15:06.362969  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:06.440395  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:06.540977  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:06.541042  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:06.641882  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.506580998s)
	I1026 14:15:06.641952  846424 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.082094284s)
	W1026 14:15:06.641987  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:06.642010  846424 retry.go:31] will retry after 346.725146ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:06.940606  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:06.989704  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:07.011088  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:07.011280  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:07.440961  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:07.542090  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:07.542360  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 14:15:07.558417  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:07.558457  846424 retry.go:31] will retry after 465.781456ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:07.940090  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:08.011851  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:08.011921  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:08.025028  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1026 14:15:08.363131  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:08.439805  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:08.511865  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:08.512205  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 14:15:08.582561  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:08.582599  846424 retry.go:31] will retry after 1.449023391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:08.940711  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:09.011541  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:09.011689  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:09.440927  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:09.454842  846424 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1026 14:15:09.454915  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:09.474050  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:15:09.542099  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:09.542269  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:09.586209  846424 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1026 14:15:09.599936  846424 addons.go:238] Setting addon gcp-auth=true in "addons-459729"
	I1026 14:15:09.600004  846424 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:15:09.600518  846424 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:15:09.618865  846424 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1026 14:15:09.618925  846424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:15:09.637719  846424 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
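	Note: the cli_runner.go/sshutil.go pair above is how minikube reaches a docker-driver node: ask Docker which host port it published for the container's 22/tcp (the inspect template in the log), then open an SSH client to 127.0.0.1 on that port with the per-machine key. The same lookup reduced to a sketch around os/exec; error handling is minimal and the function name is mine:

		// Sketch: resolve the SSH endpoint of a docker-driver node by reading
		// the host port Docker published for the container's 22/tcp.
		package sketch

		import (
			"fmt"
			"os/exec"
			"strings"
		)

		func sshHostPort(container string) (string, error) {
			out, err := exec.Command("docker", "container", "inspect", "-f",
				`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
			if err != nil {
				return "", err
			}
			return fmt.Sprintf("127.0.0.1:%s", strings.TrimSpace(string(out))), nil
		}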
	I1026 14:15:09.738033  846424 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 14:15:09.739603  846424 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1026 14:15:09.741100  846424 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1026 14:15:09.741126  846424 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1026 14:15:09.755471  846424 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1026 14:15:09.755502  846424 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1026 14:15:09.769570  846424 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 14:15:09.769600  846424 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1026 14:15:09.783135  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 14:15:09.940438  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:10.011447  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:10.011724  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:10.032590  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:10.107618  846424 addons.go:479] Verifying addon gcp-auth=true in "addons-459729"
	I1026 14:15:10.109476  846424 out.go:179] * Verifying gcp-auth addon...
	I1026 14:15:10.112303  846424 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1026 14:15:10.115588  846424 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1026 14:15:10.115614  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:10.441825  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:10.511906  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:10.511972  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 14:15:10.611392  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:10.611426  846424 retry.go:31] will retry after 1.80430156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:10.614915  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:10.862859  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:10.939690  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:11.011633  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:11.011841  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:11.116133  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:11.440853  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:11.511600  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:11.511833  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:11.615829  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:11.940803  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:12.011795  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:12.012045  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:12.115725  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:12.416588  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:12.440181  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:12.511462  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:12.511639  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:12.615801  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:12.940325  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:15:12.964755  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:12.964784  846424 retry.go:31] will retry after 1.780244556s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:13.011987  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:13.012113  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:13.116258  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:13.363321  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:13.440372  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:13.511266  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:13.511405  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:13.615430  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:13.940076  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:14.012062  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:14.012116  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:14.116253  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:14.440242  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:14.512057  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:14.512338  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:14.615992  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:14.746241  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:14.940674  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:15.011505  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:15.011640  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:15.116328  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:15.316951  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:15.316989  846424 retry.go:31] will retry after 5.440492782s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:15.440200  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:15.511134  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:15.511275  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:15.616267  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:15.862887  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:15.939913  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:16.011983  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:16.012134  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:16.116436  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:16.440198  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:16.512498  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:16.512684  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:16.615786  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:16.940627  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:17.011646  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:17.011893  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:17.116034  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:17.440400  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:17.511242  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:17.511408  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:17.616515  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:17.940364  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:18.011130  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:18.011253  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:18.116015  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:18.363065  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:18.440278  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:18.512057  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:18.512257  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:18.616302  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:18.940378  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:19.011296  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:19.011355  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:19.116473  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:19.440955  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:19.511663  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:19.511896  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:19.616320  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:19.940560  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:20.011520  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:20.011797  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:20.115557  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:20.440901  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:20.511988  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:20.512031  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:20.615783  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:20.758096  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1026 14:15:20.862647  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:20.940915  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:21.012207  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:21.012289  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:21.117067  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:21.313675  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:21.313707  846424 retry.go:31] will retry after 8.91122247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:21.440656  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:21.511553  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:21.511689  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:21.615625  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:21.940584  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:22.011440  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:22.011654  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:22.115655  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:22.440488  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:22.511406  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:22.511550  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:22.615671  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:22.940358  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:23.011074  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:23.011174  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:23.116377  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:23.363318  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:23.440377  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:23.511345  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:23.511560  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:23.615384  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:23.940379  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:24.011307  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:24.011561  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:24.116091  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:24.440587  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:24.511418  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:24.511646  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:24.615811  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:24.939855  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:25.011683  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:25.011782  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:25.116357  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:25.440984  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:25.511873  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:25.511903  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:25.615664  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:25.862365  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:25.940295  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:26.011232  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:26.011407  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:26.115402  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:26.440446  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:26.511527  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:26.511752  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:26.615540  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:26.940929  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:27.042156  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:27.042322  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:27.142622  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:27.440313  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:27.511616  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:27.511736  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:27.615910  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:27.863296  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:27.940352  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:28.011545  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:28.011563  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:28.115289  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:28.440439  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:28.511542  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:28.511612  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:28.615532  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:28.940483  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:29.011417  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:29.011572  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:29.115862  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:29.440528  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:29.511762  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:29.511961  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:29.615472  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:29.863626  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:29.940511  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:30.011347  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:30.011526  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:30.115553  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:30.225751  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:30.440732  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:30.511761  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:30.511809  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:30.615389  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:30.801581  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:30.801612  846424 retry.go:31] will retry after 13.384924225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
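The failure above (logged once by the apply wrapper and once by the retry scheduler) is a client-side validation error: at least one YAML document in /etc/kubernetes/addons/ig-crd.yaml carries no apiVersion and kind, so kubectl rejects the file before it ever reaches the apiserver. A minimal sketch of a CRD document that would pass that check — the group, names, and schema below are illustrative assumptions, not the contents of the real file:

	cat <<'EOF' | kubectl apply --dry-run=client -f -
	# every YAML document needs these two fields, or kubectl fails with
	# "apiVersion not set, kind not set" exactly as logged above
	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: traces.gadget.kinvolk.io   # hypothetical; must be <plural>.<group>
	spec:
	  group: gadget.kinvolk.io
	  scope: Namespaced
	  names:
	    kind: Trace
	    plural: traces
	  versions:
	    - name: v1alpha1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object
	          x-kubernetes-preserve-unknown-fields: true
	EOF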
	I1026 14:15:30.940459  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:31.011507  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:31.011625  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:31.115351  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:31.440233  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:31.510980  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:31.511100  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:31.616243  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:31.940463  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:32.011513  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:32.011678  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:32.115622  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:32.362628  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:32.440664  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:32.511680  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:32.511737  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:32.615569  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:32.940541  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:33.011546  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:33.011664  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:33.115806  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:33.440064  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:33.512047  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:33.512126  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:33.615997  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:33.939890  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:34.012203  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:34.012266  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:34.116285  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:34.362894  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:34.439794  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:34.511885  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:34.511888  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:34.615635  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:34.940637  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:35.011802  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:35.012036  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:35.116073  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:35.441237  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:35.511039  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:35.511313  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:35.616525  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:35.940536  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:36.011591  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:36.011913  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:36.115551  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:36.440489  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:36.511389  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:36.511596  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:36.615558  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:36.862289  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:36.940212  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:37.011145  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:37.011314  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:37.116406  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:37.440773  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:37.511657  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:37.511746  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:37.615698  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:37.940654  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:38.011611  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:38.011630  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:38.115442  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:38.440259  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:38.511046  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:38.511100  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:38.616066  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:38.863142  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:38.940023  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:39.012065  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:39.012130  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:39.116011  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:39.439627  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:39.511481  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:39.511553  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:39.615371  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:39.940298  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:40.011243  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:40.011419  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:40.115389  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:40.440352  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:40.511019  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:40.511307  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:40.616092  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:40.939667  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:41.011746  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:41.011778  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:41.115465  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:41.363466  846424 node_ready.go:57] node "addons-459729" has "Ready":"False" status (will retry)
	I1026 14:15:41.440572  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:41.511456  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:41.511511  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:41.615524  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:41.940641  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:42.011604  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:42.011718  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:42.115753  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:42.440553  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:42.511790  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:42.512024  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:42.615996  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:42.940386  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:43.011106  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:43.011220  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:43.116069  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:43.363260  846424 node_ready.go:49] node "addons-459729" is "Ready"
	I1026 14:15:43.363297  846424 node_ready.go:38] duration metric: took 41.003701767s for node "addons-459729" to be "Ready" ...
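The 41-second poll loop that just ended can be expressed as a single blocking command; a sketch of the equivalent check, with an arbitrary timeout:

	# waits for the same Ready condition the node_ready poll was watching
	kubectl wait --for=condition=Ready node/addons-459729 --timeout=120s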
	I1026 14:15:43.363317  846424 api_server.go:52] waiting for apiserver process to appear ...
	I1026 14:15:43.363400  846424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 14:15:43.381711  846424 api_server.go:72] duration metric: took 41.62593283s to wait for apiserver process to appear ...
	I1026 14:15:43.381745  846424 api_server.go:88] waiting for apiserver healthz status ...
	I1026 14:15:43.381771  846424 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 14:15:43.386270  846424 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1026 14:15:43.387310  846424 api_server.go:141] control plane version: v1.34.1
	I1026 14:15:43.387346  846424 api_server.go:131] duration metric: took 5.591629ms to wait for apiserver health ...
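The healthz endpoint is served to unauthenticated clients by default, so the probe can be reproduced by hand against the same address; -k is needed because the cluster presents a self-signed certificate:

	# prints "ok" on success, matching the 200 logged above
	curl -sk https://192.168.49.2:8443/healthz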
	I1026 14:15:43.387357  846424 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 14:15:43.390642  846424 system_pods.go:59] 20 kube-system pods found
	I1026 14:15:43.390691  846424 system_pods.go:61] "amd-gpu-device-plugin-cpl45" [3361dd34-f7d4-4824-b347-6f718134c1bc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 14:15:43.390702  846424 system_pods.go:61] "coredns-66bc5c9577-58kmh" [5f6dbec0-d423-40de-b8d5-a900bc1f5851] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 14:15:43.390711  846424 system_pods.go:61] "csi-hostpath-attacher-0" [eab2876d-7674-4188-8967-19945573776e] Pending
	I1026 14:15:43.390716  846424 system_pods.go:61] "csi-hostpath-resizer-0" [abb5e910-471c-42c2-ae26-54af2fb0e618] Pending
	I1026 14:15:43.390720  846424 system_pods.go:61] "csi-hostpathplugin-86x7s" [a3788919-a77b-413f-a55b-c6a616ccb202] Pending
	I1026 14:15:43.390723  846424 system_pods.go:61] "etcd-addons-459729" [ffa30eb3-3fdb-4184-bb14-f06554bd4979] Running
	I1026 14:15:43.390726  846424 system_pods.go:61] "kindnet-qskcd" [cf0b58e9-eade-47c7-840d-1de1857e53f1] Running
	I1026 14:15:43.390729  846424 system_pods.go:61] "kube-apiserver-addons-459729" [9ab803e5-033d-4f89-8aae-9f6ccc56ea17] Running
	I1026 14:15:43.390732  846424 system_pods.go:61] "kube-controller-manager-addons-459729" [579e4b55-312d-49a7-bd86-7d65e8efde23] Running
	I1026 14:15:43.390742  846424 system_pods.go:61] "kube-ingress-dns-minikube" [238ae152-8a88-4041-abdd-bf5aacdc6f1a] Pending
	I1026 14:15:43.390745  846424 system_pods.go:61] "kube-proxy-2f7sr" [8ea92d4a-c60f-40db-ab7a-8772c201060f] Running
	I1026 14:15:43.390751  846424 system_pods.go:61] "kube-scheduler-addons-459729" [f7a61f82-6ea9-4993-b093-a03245db6ed6] Running
	I1026 14:15:43.390756  846424 system_pods.go:61] "metrics-server-85b7d694d7-g2nwm" [ea0a025f-f342-49d8-89cc-a9bd82a08b87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:15:43.390762  846424 system_pods.go:61] "nvidia-device-plugin-daemonset-24shm" [1bb55f2d-872c-4696-aac2-64ab714c33e4] Pending
	I1026 14:15:43.390784  846424 system_pods.go:61] "registry-6b586f9694-ds6k9" [14709e0b-ba9d-4eb0-b79e-a8106cba342e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:15:43.390790  846424 system_pods.go:61] "registry-creds-764b6fb674-dk4lc" [11a2adc0-f603-426f-af30-919a48eee4bc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:15:43.390795  846424 system_pods.go:61] "registry-proxy-cs2k2" [cecd1865-e35b-4581-8aaf-358948bc244c] Pending
	I1026 14:15:43.390799  846424 system_pods.go:61] "snapshot-controller-7d9fbc56b8-d9lzl" [673a7351-7a17-4a94-b2df-c246a1fd5519] Pending
	I1026 14:15:43.390802  846424 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wrh9q" [66fdcdd8-8b70-496f-8b43-bf5dc2c1cb1a] Pending
	I1026 14:15:43.390807  846424 system_pods.go:61] "storage-provisioner" [01091c73-f5b0-4c51-ad56-fdc2723f09b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 14:15:43.390818  846424 system_pods.go:74] duration metric: took 3.45377ms to wait for pod list to return data ...
	I1026 14:15:43.390829  846424 default_sa.go:34] waiting for default service account to be created ...
	I1026 14:15:43.394537  846424 default_sa.go:45] found service account: "default"
	I1026 14:15:43.394566  846424 default_sa.go:55] duration metric: took 3.728908ms for default service account to be created ...
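The default_sa step is a plain existence check; the same lookup from outside the test harness, assuming the usual default namespace:

	kubectl -n default get serviceaccount default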
	I1026 14:15:43.394579  846424 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 14:15:43.398295  846424 system_pods.go:86] 20 kube-system pods found
	I1026 14:15:43.398331  846424 system_pods.go:89] "amd-gpu-device-plugin-cpl45" [3361dd34-f7d4-4824-b347-6f718134c1bc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 14:15:43.398340  846424 system_pods.go:89] "coredns-66bc5c9577-58kmh" [5f6dbec0-d423-40de-b8d5-a900bc1f5851] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 14:15:43.398348  846424 system_pods.go:89] "csi-hostpath-attacher-0" [eab2876d-7674-4188-8967-19945573776e] Pending
	I1026 14:15:43.398354  846424 system_pods.go:89] "csi-hostpath-resizer-0" [abb5e910-471c-42c2-ae26-54af2fb0e618] Pending
	I1026 14:15:43.398359  846424 system_pods.go:89] "csi-hostpathplugin-86x7s" [a3788919-a77b-413f-a55b-c6a616ccb202] Pending
	I1026 14:15:43.398364  846424 system_pods.go:89] "etcd-addons-459729" [ffa30eb3-3fdb-4184-bb14-f06554bd4979] Running
	I1026 14:15:43.398371  846424 system_pods.go:89] "kindnet-qskcd" [cf0b58e9-eade-47c7-840d-1de1857e53f1] Running
	I1026 14:15:43.398377  846424 system_pods.go:89] "kube-apiserver-addons-459729" [9ab803e5-033d-4f89-8aae-9f6ccc56ea17] Running
	I1026 14:15:43.398385  846424 system_pods.go:89] "kube-controller-manager-addons-459729" [579e4b55-312d-49a7-bd86-7d65e8efde23] Running
	I1026 14:15:43.398396  846424 system_pods.go:89] "kube-ingress-dns-minikube" [238ae152-8a88-4041-abdd-bf5aacdc6f1a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 14:15:43.398405  846424 system_pods.go:89] "kube-proxy-2f7sr" [8ea92d4a-c60f-40db-ab7a-8772c201060f] Running
	I1026 14:15:43.398412  846424 system_pods.go:89] "kube-scheduler-addons-459729" [f7a61f82-6ea9-4993-b093-a03245db6ed6] Running
	I1026 14:15:43.398423  846424 system_pods.go:89] "metrics-server-85b7d694d7-g2nwm" [ea0a025f-f342-49d8-89cc-a9bd82a08b87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:15:43.398432  846424 system_pods.go:89] "nvidia-device-plugin-daemonset-24shm" [1bb55f2d-872c-4696-aac2-64ab714c33e4] Pending
	I1026 14:15:43.398441  846424 system_pods.go:89] "registry-6b586f9694-ds6k9" [14709e0b-ba9d-4eb0-b79e-a8106cba342e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:15:43.398452  846424 system_pods.go:89] "registry-creds-764b6fb674-dk4lc" [11a2adc0-f603-426f-af30-919a48eee4bc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:15:43.398460  846424 system_pods.go:89] "registry-proxy-cs2k2" [cecd1865-e35b-4581-8aaf-358948bc244c] Pending
	I1026 14:15:43.398466  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d9lzl" [673a7351-7a17-4a94-b2df-c246a1fd5519] Pending
	I1026 14:15:43.398474  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wrh9q" [66fdcdd8-8b70-496f-8b43-bf5dc2c1cb1a] Pending
	I1026 14:15:43.398481  846424 system_pods.go:89] "storage-provisioner" [01091c73-f5b0-4c51-ad56-fdc2723f09b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 14:15:43.398503  846424 retry.go:31] will retry after 285.578303ms: missing components: kube-dns
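kube-dns here means CoreDNS, whose pods carry the standard k8s-app=kube-dns label; the component the retry loop is waiting on can be inspected directly (label and namespace assumed to be the defaults):

	# lists the pod(s) behind the "missing components: kube-dns" message;
	# coredns-66bc5c9577-58kmh flips to Running about a second later
	kubectl -n kube-system get pods -l k8s-app=kube-dns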
	I1026 14:15:43.439988  846424 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 14:15:43.440011  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:43.511891  846424 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 14:15:43.511923  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:43.512089  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:43.617305  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:43.720800  846424 system_pods.go:86] 20 kube-system pods found
	I1026 14:15:43.720851  846424 system_pods.go:89] "amd-gpu-device-plugin-cpl45" [3361dd34-f7d4-4824-b347-6f718134c1bc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 14:15:43.720871  846424 system_pods.go:89] "coredns-66bc5c9577-58kmh" [5f6dbec0-d423-40de-b8d5-a900bc1f5851] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 14:15:43.720883  846424 system_pods.go:89] "csi-hostpath-attacher-0" [eab2876d-7674-4188-8967-19945573776e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 14:15:43.720891  846424 system_pods.go:89] "csi-hostpath-resizer-0" [abb5e910-471c-42c2-ae26-54af2fb0e618] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 14:15:43.720909  846424 system_pods.go:89] "csi-hostpathplugin-86x7s" [a3788919-a77b-413f-a55b-c6a616ccb202] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 14:15:43.720924  846424 system_pods.go:89] "etcd-addons-459729" [ffa30eb3-3fdb-4184-bb14-f06554bd4979] Running
	I1026 14:15:43.720930  846424 system_pods.go:89] "kindnet-qskcd" [cf0b58e9-eade-47c7-840d-1de1857e53f1] Running
	I1026 14:15:43.720951  846424 system_pods.go:89] "kube-apiserver-addons-459729" [9ab803e5-033d-4f89-8aae-9f6ccc56ea17] Running
	I1026 14:15:43.720962  846424 system_pods.go:89] "kube-controller-manager-addons-459729" [579e4b55-312d-49a7-bd86-7d65e8efde23] Running
	I1026 14:15:43.720971  846424 system_pods.go:89] "kube-ingress-dns-minikube" [238ae152-8a88-4041-abdd-bf5aacdc6f1a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 14:15:43.720984  846424 system_pods.go:89] "kube-proxy-2f7sr" [8ea92d4a-c60f-40db-ab7a-8772c201060f] Running
	I1026 14:15:43.720991  846424 system_pods.go:89] "kube-scheduler-addons-459729" [f7a61f82-6ea9-4993-b093-a03245db6ed6] Running
	I1026 14:15:43.721001  846424 system_pods.go:89] "metrics-server-85b7d694d7-g2nwm" [ea0a025f-f342-49d8-89cc-a9bd82a08b87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:15:43.721016  846424 system_pods.go:89] "nvidia-device-plugin-daemonset-24shm" [1bb55f2d-872c-4696-aac2-64ab714c33e4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 14:15:43.721023  846424 system_pods.go:89] "registry-6b586f9694-ds6k9" [14709e0b-ba9d-4eb0-b79e-a8106cba342e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:15:43.721031  846424 system_pods.go:89] "registry-creds-764b6fb674-dk4lc" [11a2adc0-f603-426f-af30-919a48eee4bc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:15:43.721042  846424 system_pods.go:89] "registry-proxy-cs2k2" [cecd1865-e35b-4581-8aaf-358948bc244c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 14:15:43.721056  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d9lzl" [673a7351-7a17-4a94-b2df-c246a1fd5519] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:15:43.721066  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wrh9q" [66fdcdd8-8b70-496f-8b43-bf5dc2c1cb1a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:15:43.721075  846424 system_pods.go:89] "storage-provisioner" [01091c73-f5b0-4c51-ad56-fdc2723f09b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 14:15:43.721098  846424 retry.go:31] will retry after 329.971946ms: missing components: kube-dns
	I1026 14:15:43.942121  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:44.012262  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:44.012376  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:44.056065  846424 system_pods.go:86] 20 kube-system pods found
	I1026 14:15:44.056108  846424 system_pods.go:89] "amd-gpu-device-plugin-cpl45" [3361dd34-f7d4-4824-b347-6f718134c1bc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 14:15:44.056119  846424 system_pods.go:89] "coredns-66bc5c9577-58kmh" [5f6dbec0-d423-40de-b8d5-a900bc1f5851] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 14:15:44.056129  846424 system_pods.go:89] "csi-hostpath-attacher-0" [eab2876d-7674-4188-8967-19945573776e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 14:15:44.056139  846424 system_pods.go:89] "csi-hostpath-resizer-0" [abb5e910-471c-42c2-ae26-54af2fb0e618] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 14:15:44.056147  846424 system_pods.go:89] "csi-hostpathplugin-86x7s" [a3788919-a77b-413f-a55b-c6a616ccb202] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 14:15:44.056153  846424 system_pods.go:89] "etcd-addons-459729" [ffa30eb3-3fdb-4184-bb14-f06554bd4979] Running
	I1026 14:15:44.056171  846424 system_pods.go:89] "kindnet-qskcd" [cf0b58e9-eade-47c7-840d-1de1857e53f1] Running
	I1026 14:15:44.056179  846424 system_pods.go:89] "kube-apiserver-addons-459729" [9ab803e5-033d-4f89-8aae-9f6ccc56ea17] Running
	I1026 14:15:44.056184  846424 system_pods.go:89] "kube-controller-manager-addons-459729" [579e4b55-312d-49a7-bd86-7d65e8efde23] Running
	I1026 14:15:44.056193  846424 system_pods.go:89] "kube-ingress-dns-minikube" [238ae152-8a88-4041-abdd-bf5aacdc6f1a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 14:15:44.056202  846424 system_pods.go:89] "kube-proxy-2f7sr" [8ea92d4a-c60f-40db-ab7a-8772c201060f] Running
	I1026 14:15:44.056209  846424 system_pods.go:89] "kube-scheduler-addons-459729" [f7a61f82-6ea9-4993-b093-a03245db6ed6] Running
	I1026 14:15:44.056217  846424 system_pods.go:89] "metrics-server-85b7d694d7-g2nwm" [ea0a025f-f342-49d8-89cc-a9bd82a08b87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:15:44.056229  846424 system_pods.go:89] "nvidia-device-plugin-daemonset-24shm" [1bb55f2d-872c-4696-aac2-64ab714c33e4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 14:15:44.056239  846424 system_pods.go:89] "registry-6b586f9694-ds6k9" [14709e0b-ba9d-4eb0-b79e-a8106cba342e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:15:44.056251  846424 system_pods.go:89] "registry-creds-764b6fb674-dk4lc" [11a2adc0-f603-426f-af30-919a48eee4bc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:15:44.056261  846424 system_pods.go:89] "registry-proxy-cs2k2" [cecd1865-e35b-4581-8aaf-358948bc244c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 14:15:44.056270  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d9lzl" [673a7351-7a17-4a94-b2df-c246a1fd5519] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:15:44.056276  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wrh9q" [66fdcdd8-8b70-496f-8b43-bf5dc2c1cb1a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:15:44.056281  846424 system_pods.go:89] "storage-provisioner" [01091c73-f5b0-4c51-ad56-fdc2723f09b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 14:15:44.056300  846424 retry.go:31] will retry after 468.560484ms: missing components: kube-dns
	I1026 14:15:44.117136  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:44.187375  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:15:44.441427  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:44.511423  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:44.511459  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:44.530251  846424 system_pods.go:86] 20 kube-system pods found
	I1026 14:15:44.530287  846424 system_pods.go:89] "amd-gpu-device-plugin-cpl45" [3361dd34-f7d4-4824-b347-6f718134c1bc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 14:15:44.530295  846424 system_pods.go:89] "coredns-66bc5c9577-58kmh" [5f6dbec0-d423-40de-b8d5-a900bc1f5851] Running
	I1026 14:15:44.530306  846424 system_pods.go:89] "csi-hostpath-attacher-0" [eab2876d-7674-4188-8967-19945573776e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 14:15:44.530314  846424 system_pods.go:89] "csi-hostpath-resizer-0" [abb5e910-471c-42c2-ae26-54af2fb0e618] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 14:15:44.530323  846424 system_pods.go:89] "csi-hostpathplugin-86x7s" [a3788919-a77b-413f-a55b-c6a616ccb202] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 14:15:44.530329  846424 system_pods.go:89] "etcd-addons-459729" [ffa30eb3-3fdb-4184-bb14-f06554bd4979] Running
	I1026 14:15:44.530334  846424 system_pods.go:89] "kindnet-qskcd" [cf0b58e9-eade-47c7-840d-1de1857e53f1] Running
	I1026 14:15:44.530344  846424 system_pods.go:89] "kube-apiserver-addons-459729" [9ab803e5-033d-4f89-8aae-9f6ccc56ea17] Running
	I1026 14:15:44.530350  846424 system_pods.go:89] "kube-controller-manager-addons-459729" [579e4b55-312d-49a7-bd86-7d65e8efde23] Running
	I1026 14:15:44.530361  846424 system_pods.go:89] "kube-ingress-dns-minikube" [238ae152-8a88-4041-abdd-bf5aacdc6f1a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 14:15:44.530366  846424 system_pods.go:89] "kube-proxy-2f7sr" [8ea92d4a-c60f-40db-ab7a-8772c201060f] Running
	I1026 14:15:44.530376  846424 system_pods.go:89] "kube-scheduler-addons-459729" [f7a61f82-6ea9-4993-b093-a03245db6ed6] Running
	I1026 14:15:44.530383  846424 system_pods.go:89] "metrics-server-85b7d694d7-g2nwm" [ea0a025f-f342-49d8-89cc-a9bd82a08b87] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:15:44.530396  846424 system_pods.go:89] "nvidia-device-plugin-daemonset-24shm" [1bb55f2d-872c-4696-aac2-64ab714c33e4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 14:15:44.530415  846424 system_pods.go:89] "registry-6b586f9694-ds6k9" [14709e0b-ba9d-4eb0-b79e-a8106cba342e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:15:44.530428  846424 system_pods.go:89] "registry-creds-764b6fb674-dk4lc" [11a2adc0-f603-426f-af30-919a48eee4bc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:15:44.530438  846424 system_pods.go:89] "registry-proxy-cs2k2" [cecd1865-e35b-4581-8aaf-358948bc244c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 14:15:44.530446  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d9lzl" [673a7351-7a17-4a94-b2df-c246a1fd5519] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:15:44.530456  846424 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wrh9q" [66fdcdd8-8b70-496f-8b43-bf5dc2c1cb1a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:15:44.530462  846424 system_pods.go:89] "storage-provisioner" [01091c73-f5b0-4c51-ad56-fdc2723f09b2] Running
	I1026 14:15:44.530472  846424 system_pods.go:126] duration metric: took 1.135885614s to wait for k8s-apps to be running ...
	I1026 14:15:44.530482  846424 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 14:15:44.530536  846424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
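The kubelet check relies only on the exit status of systemctl; an equivalent manual form, assuming the default kubelet.service unit name:

	# exit status 0 if the unit is active; --quiet suppresses output
	sudo systemctl is-active --quiet kubelet && echo kubelet is running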
	I1026 14:15:44.616031  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:15:44.908118  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:15:44.908188  846424 retry.go:31] will retry after 19.716620035s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
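The stderr above names its own escape hatch: --validate=false skips client-side schema validation entirely. Re-running the failing command with that flag is a way to confirm the diagnosis, at the cost of disabling the very check that caught the malformed document:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
	  --validate=false -f /etc/kubernetes/addons/ig-crd.yaml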
	I1026 14:15:44.908207  846424 system_svc.go:56] duration metric: took 377.714352ms WaitForService to wait for kubelet
	I1026 14:15:44.908230  846424 kubeadm.go:586] duration metric: took 43.152458642s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 14:15:44.908250  846424 node_conditions.go:102] verifying NodePressure condition ...
	I1026 14:15:44.911337  846424 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 14:15:44.911364  846424 node_conditions.go:123] node cpu capacity is 8
	I1026 14:15:44.911397  846424 node_conditions.go:105] duration metric: took 3.140307ms to run NodePressure ...
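The storage and CPU figures above are read from the node object; describing the node shows the Conditions, Capacity, and Allocatable blocks that the NodePressure verification draws on:

	kubectl describe node addons-459729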
	I1026 14:15:44.911413  846424 start.go:241] waiting for startup goroutines ...
	I1026 14:15:44.940945  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:45.011805  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:45.011886  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:45.116285  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:45.440843  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:45.513224  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:45.513412  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:45.616570  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:45.942361  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:46.012675  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:46.013528  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:46.117474  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:46.441947  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:46.512341  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:46.512535  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:46.616794  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:46.940470  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:47.011869  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:47.011931  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:47.116563  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:47.440841  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:47.512115  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:47.512220  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:47.616287  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:47.941835  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:48.012250  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:48.012341  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:48.116362  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:48.441006  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:48.512345  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:48.512356  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:48.616602  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:48.940607  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:49.011849  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:49.012016  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:49.116379  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:49.440767  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:49.511952  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:49.511976  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:49.616460  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:49.941743  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:50.012219  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:50.012379  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:50.116868  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:50.441466  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:50.513107  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:50.515804  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:50.616823  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:50.941030  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:51.012636  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:51.012725  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:51.115494  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:51.441781  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:51.512071  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:51.512308  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:51.616593  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:51.940965  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:52.012409  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:52.012488  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:52.115563  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:52.443333  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:52.511231  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:52.511257  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:52.616100  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:52.940805  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:53.012226  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:53.012332  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:53.116967  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:53.440270  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:53.511122  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:53.511202  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:53.615637  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:53.940600  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:54.011614  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:54.011714  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:54.116924  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:54.441210  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:54.512865  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:54.513048  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:54.617005  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:54.940859  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:55.012113  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:55.012224  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:55.116663  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:55.441153  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:55.512196  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:55.512413  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:55.616687  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:55.940578  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:56.011673  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:56.011694  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:56.115735  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:56.440993  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:56.512523  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:56.512651  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:56.615678  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:56.940579  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:57.041197  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:57.041223  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:57.116077  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:57.441079  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:57.541533  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:57.541819  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:57.615870  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:57.940598  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:58.011563  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:58.011563  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:58.115144  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:58.441032  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:58.511836  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:58.511904  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:58.616428  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:58.941373  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:59.012293  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:59.012340  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:59.115780  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:59.440337  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:15:59.511639  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:15:59.511739  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:15:59.616062  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:15:59.940947  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:00.012642  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:00.012851  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:00.116169  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:00.441026  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:00.512261  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:00.512331  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:00.616120  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:00.941108  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:01.012439  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:01.012548  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:01.115880  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:01.440399  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:01.511194  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:01.511239  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:01.616484  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:01.940063  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:02.012035  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:02.012152  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:02.116524  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:02.440114  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:02.511694  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:02.511958  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:02.616287  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:02.940613  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:03.011613  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:03.011834  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:03.115994  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:03.440884  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:03.541763  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:03.541805  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:03.615582  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:03.939875  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:04.012997  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:04.012997  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:04.116050  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:04.440720  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:04.541348  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:04.541372  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:04.616022  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:04.625105  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:04.940631  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:05.011348  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:05.011465  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:05.116030  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:16:05.176887  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:05.176928  846424 retry.go:31] will retry after 26.54487401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
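
The two failures above are client-side: kubectl's validation rejects ig-crd.yaml because a document in that file is missing the required apiVersion and kind fields, so that file never reaches the API server (the other manifests in the same apply still go through, which is why the stdout shows "unchanged"/"configured"). minikube then schedules a retry with backoff (retry.go:31 above). A minimal Go sketch of that retry pattern using apimachinery's wait helpers — the backoff values and the plain kubectl exec call are illustrative, not minikube's actual implementation:

	package main

	import (
		"fmt"
		"os/exec"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		// Illustrative backoff; minikube computes its own jittered delay.
		backoff := wait.Backoff{Duration: 5 * time.Second, Factor: 2.0, Jitter: 0.1, Steps: 4}
		err := wait.ExponentialBackoff(backoff, func() (bool, error) {
			out, err := exec.Command("kubectl", "apply", "--force",
				"-f", "/etc/kubernetes/addons/ig-crd.yaml").CombinedOutput()
			if err != nil {
				fmt.Printf("apply failed, will retry: %v\n%s", err, out)
				return false, nil // not done: try again after the next backoff step
			}
			return true, nil // applied cleanly
		})
		if err != nil {
			fmt.Println("giving up:", err)
		}
	}
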
	I1026 14:16:05.441807  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:05.511995  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:05.512004  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:05.617230  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:05.944172  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:06.012901  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:06.014375  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:06.116181  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:06.473520  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:06.678507  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:06.678878  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:06.678919  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:06.943847  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:07.014324  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:07.015611  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:07.115649  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:07.440832  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:07.512429  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:07.512442  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:07.616798  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:07.941418  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:08.042703  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:08.042726  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:08.116002  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:08.440870  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:08.512242  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:08.512318  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:08.617191  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:08.940574  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:09.011290  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:09.011501  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:09.116370  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:09.441256  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:09.512541  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:09.512777  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:09.615743  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:09.940529  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:10.017652  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:10.017858  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:10.151413  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:10.551948  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:10.552021  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:10.552037  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:10.615849  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:10.940842  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:11.012102  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:11.012263  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:11.116194  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:11.440707  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:11.511926  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:11.511992  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:11.616604  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:11.959290  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:12.081235  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:12.081274  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:12.243116  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:12.442118  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:12.542025  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:12.542035  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:12.615889  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:12.940336  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:13.011966  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:13.012041  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:13.116222  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:13.441408  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:13.511717  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:13.511795  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:13.616459  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:13.941451  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:14.011632  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:14.011677  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:14.115643  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:14.440266  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:14.512541  846424 kapi.go:107] duration metric: took 1m11.004457602s to wait for kubernetes.io/minikube-addons=registry ...
	I1026 14:16:14.512727  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:14.616135  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:14.941020  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:15.011868  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:15.116053  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:15.441317  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:15.512641  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:15.616897  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:15.940327  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:16.012834  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:16.116231  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:16.554332  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:16.554383  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:16.702636  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:16.941451  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:17.042536  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:17.115437  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:17.440730  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:17.512822  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:17.615703  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:17.940235  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:18.011654  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:18.115582  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:18.442174  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:18.512192  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:18.615605  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:18.940832  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:19.012192  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:19.116052  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:19.445141  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:19.512920  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:19.616513  846424 kapi.go:107] duration metric: took 1m9.504207447s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1026 14:16:19.618361  846424 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-459729 cluster.
	I1026 14:16:19.619952  846424 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1026 14:16:19.621190  846424 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
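
Per the note above, the opt-out for gcp-auth credential mounting is a pod label keyed gcp-auth-skip-secret. A minimal sketch of a pod spec carrying that label, written with the core/v1 Go types — the pod name, container, and image are illustrative:

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	// skipPod builds a pod the gcp-auth webhook should leave unmutated,
	// because it carries the gcp-auth-skip-secret label.
	func skipPod() *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:   "no-creds", // illustrative
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
			},
		}
	}

	func main() {
		fmt.Println(skipPod().Labels)
	}
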
	I1026 14:16:19.941420  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:20.012984  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:20.467066  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:20.512438  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:20.941556  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:21.011657  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:21.440326  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:21.512699  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:21.940391  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:22.013074  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:22.441421  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:22.512726  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:22.941511  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:23.011427  846424 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:23.441692  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:23.541961  846424 kapi.go:107] duration metric: took 1m20.033815029s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1026 14:16:23.939860  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:24.441033  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:24.940666  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:25.441106  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:25.949894  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:26.440937  846424 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:26.940765  846424 kapi.go:107] duration metric: took 1m23.004082526s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
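
The kapi.go:96 lines that dominate this log are a label-selector poll: each tick lists the pods matching one addon label and reports "Pending: [<nil>]" until a match reaches Running. A minimal client-go sketch of the same check — the kubeconfig path, poll interval, and timeout are illustrative assumptions, and the selector and namespace are taken from the log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Illustrative kubeconfig path; minikube uses its own loader.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		sel := "kubernetes.io/minikube-addons=csi-hostpath-driver" // label from the log
		err = wait.PollImmediate(500*time.Millisecond, 15*time.Minute, func() (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: sel})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // no matching pod yet: the "Pending: [<nil>]" case
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", sel, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
		if err != nil {
			panic(err)
		}
	}
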
	I1026 14:16:31.723355  846424 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1026 14:16:32.268610  846424 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 14:16:32.268728  846424 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1026 14:16:32.270314  846424 out.go:179] * Enabled addons: storage-provisioner, ingress-dns, amd-gpu-device-plugin, registry-creds, nvidia-device-plugin, cloud-spanner, metrics-server, yakd, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1026 14:16:32.271316  846424 addons.go:514] duration metric: took 1m30.515515484s for enable addons: enabled=[storage-provisioner ingress-dns amd-gpu-device-plugin registry-creds nvidia-device-plugin cloud-spanner metrics-server yakd default-storageclass storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1026 14:16:32.271351  846424 start.go:246] waiting for cluster config update ...
	I1026 14:16:32.271372  846424 start.go:255] writing updated cluster config ...
	I1026 14:16:32.271625  846424 ssh_runner.go:195] Run: rm -f paused
	I1026 14:16:32.275601  846424 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 14:16:32.278988  846424 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-58kmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.283800  846424 pod_ready.go:94] pod "coredns-66bc5c9577-58kmh" is "Ready"
	I1026 14:16:32.283832  846424 pod_ready.go:86] duration metric: took 4.822784ms for pod "coredns-66bc5c9577-58kmh" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.285751  846424 pod_ready.go:83] waiting for pod "etcd-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.289426  846424 pod_ready.go:94] pod "etcd-addons-459729" is "Ready"
	I1026 14:16:32.289447  846424 pod_ready.go:86] duration metric: took 3.67723ms for pod "etcd-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.291471  846424 pod_ready.go:83] waiting for pod "kube-apiserver-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.295066  846424 pod_ready.go:94] pod "kube-apiserver-addons-459729" is "Ready"
	I1026 14:16:32.295090  846424 pod_ready.go:86] duration metric: took 3.601221ms for pod "kube-apiserver-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.297016  846424 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.680183  846424 pod_ready.go:94] pod "kube-controller-manager-addons-459729" is "Ready"
	I1026 14:16:32.680220  846424 pod_ready.go:86] duration metric: took 383.185277ms for pod "kube-controller-manager-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:32.879387  846424 pod_ready.go:83] waiting for pod "kube-proxy-2f7sr" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:33.279767  846424 pod_ready.go:94] pod "kube-proxy-2f7sr" is "Ready"
	I1026 14:16:33.279836  846424 pod_ready.go:86] duration metric: took 400.42041ms for pod "kube-proxy-2f7sr" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:33.480448  846424 pod_ready.go:83] waiting for pod "kube-scheduler-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:33.880276  846424 pod_ready.go:94] pod "kube-scheduler-addons-459729" is "Ready"
	I1026 14:16:33.880305  846424 pod_ready.go:86] duration metric: took 399.829511ms for pod "kube-scheduler-addons-459729" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:16:33.880320  846424 pod_ready.go:40] duration metric: took 1.604687476s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 14:16:33.928054  846424 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 14:16:33.930783  846424 out.go:179] * Done! kubectl is now configured to use "addons-459729" cluster and "default" namespace by default
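
The pod_ready.go checks just above go one step further than the phase poll: a pod only counts as "Ready" when its PodReady status condition is True. A minimal helper expressing that test with the core/v1 types — the function name is illustrative, not minikube's:

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// isPodReady mirrors the pod_ready.go:94 check above: a pod is "Ready"
	// exactly when its PodReady condition reports ConditionTrue.
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		p := &corev1.Pod{} // no conditions yet: not ready
		fmt.Println(isPodReady(p))
	}
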
	
	
	==> CRI-O <==
	Oct 26 14:16:26 addons-459729 crio[770]: time="2025-10-26T14:16:26.238905751Z" level=info msg="Starting container: 19aef1ec8510c14e849b7cefcdc09f57ad870ee7d19676222f9e11dadd8cc042" id=6a1ba477-64cf-4d34-9a77-68657172473c name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 14:16:26 addons-459729 crio[770]: time="2025-10-26T14:16:26.242127321Z" level=info msg="Started container" PID=6272 containerID=19aef1ec8510c14e849b7cefcdc09f57ad870ee7d19676222f9e11dadd8cc042 description=kube-system/csi-hostpathplugin-86x7s/csi-snapshotter id=6a1ba477-64cf-4d34-9a77-68657172473c name=/runtime.v1.RuntimeService/StartContainer sandboxID=76ed6035570f7af08975dc0e4ff37379439853b8630edbc0d0ee2efa41c87541
	Oct 26 14:16:34 addons-459729 crio[770]: time="2025-10-26T14:16:34.763099264Z" level=info msg="Running pod sandbox: default/busybox/POD" id=2ce41ea8-1bf3-4dd0-ab94-05be707adf05 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 14:16:34 addons-459729 crio[770]: time="2025-10-26T14:16:34.763234012Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 14:16:34 addons-459729 crio[770]: time="2025-10-26T14:16:34.769657956Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4df2b4b18d1176bee7f30e0f2f4a136670c36675d487d11bff26b7ae62a09705 UID:34ab5631-8a88-449f-95bb-06d39c99c9a5 NetNS:/var/run/netns/0de8dd3c-8bb9-4a03-af0d-fbffeac6f903 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128880}] Aliases:map[]}"
	Oct 26 14:16:34 addons-459729 crio[770]: time="2025-10-26T14:16:34.769690539Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 26 14:16:34 addons-459729 crio[770]: time="2025-10-26T14:16:34.779590282Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4df2b4b18d1176bee7f30e0f2f4a136670c36675d487d11bff26b7ae62a09705 UID:34ab5631-8a88-449f-95bb-06d39c99c9a5 NetNS:/var/run/netns/0de8dd3c-8bb9-4a03-af0d-fbffeac6f903 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128880}] Aliases:map[]}"
	Oct 26 14:16:34 addons-459729 crio[770]: time="2025-10-26T14:16:34.779723782Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 26 14:16:34 addons-459729 crio[770]: time="2025-10-26T14:16:34.780585879Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 14:16:34 addons-459729 crio[770]: time="2025-10-26T14:16:34.781353167Z" level=info msg="Ran pod sandbox 4df2b4b18d1176bee7f30e0f2f4a136670c36675d487d11bff26b7ae62a09705 with infra container: default/busybox/POD" id=2ce41ea8-1bf3-4dd0-ab94-05be707adf05 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 14:16:34 addons-459729 crio[770]: time="2025-10-26T14:16:34.782627274Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9927f5da-24e3-4bcd-8f51-a8d6e2668c11 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:16:34 addons-459729 crio[770]: time="2025-10-26T14:16:34.782755872Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9927f5da-24e3-4bcd-8f51-a8d6e2668c11 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:16:34 addons-459729 crio[770]: time="2025-10-26T14:16:34.782790654Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=9927f5da-24e3-4bcd-8f51-a8d6e2668c11 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:16:34 addons-459729 crio[770]: time="2025-10-26T14:16:34.783370577Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ae66ff81-022e-46fc-87f2-f86f5c7c3967 name=/runtime.v1.ImageService/PullImage
	Oct 26 14:16:34 addons-459729 crio[770]: time="2025-10-26T14:16:34.784720725Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 26 14:16:35 addons-459729 crio[770]: time="2025-10-26T14:16:35.412430274Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=ae66ff81-022e-46fc-87f2-f86f5c7c3967 name=/runtime.v1.ImageService/PullImage
	Oct 26 14:16:35 addons-459729 crio[770]: time="2025-10-26T14:16:35.413126258Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=21a1cab7-c09b-4fa6-bffd-aef27db20068 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:16:35 addons-459729 crio[770]: time="2025-10-26T14:16:35.414625405Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=caf4a4a5-1d8b-4ab0-b53d-673563f44975 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:16:35 addons-459729 crio[770]: time="2025-10-26T14:16:35.4211311Z" level=info msg="Creating container: default/busybox/busybox" id=c0e2204c-6fbe-454f-9f81-06436bb2b3bb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 14:16:35 addons-459729 crio[770]: time="2025-10-26T14:16:35.4212983Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 14:16:35 addons-459729 crio[770]: time="2025-10-26T14:16:35.42753155Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 14:16:35 addons-459729 crio[770]: time="2025-10-26T14:16:35.42798146Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 14:16:35 addons-459729 crio[770]: time="2025-10-26T14:16:35.466693321Z" level=info msg="Created container 27b70ccf2a2bc70ed1ba8f6188eaa9754ea4c2fb532d17080ac29e551618a683: default/busybox/busybox" id=c0e2204c-6fbe-454f-9f81-06436bb2b3bb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 14:16:35 addons-459729 crio[770]: time="2025-10-26T14:16:35.467465278Z" level=info msg="Starting container: 27b70ccf2a2bc70ed1ba8f6188eaa9754ea4c2fb532d17080ac29e551618a683" id=8c512709-0f3f-4b2d-a1de-2aff9c674e12 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 14:16:35 addons-459729 crio[770]: time="2025-10-26T14:16:35.469297134Z" level=info msg="Started container" PID=6439 containerID=27b70ccf2a2bc70ed1ba8f6188eaa9754ea4c2fb532d17080ac29e551618a683 description=default/busybox/busybox id=8c512709-0f3f-4b2d-a1de-2aff9c674e12 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4df2b4b18d1176bee7f30e0f2f4a136670c36675d487d11bff26b7ae62a09705
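
The CRI-O entries above are the server side of CRI gRPC calls from the kubelet: RunPodSandbox, ImageStatus, PullImage, CreateContainer, StartContainer. A minimal sketch of the ImageStatus-then-PullImage exchange performed for the busybox image, using the cri-api client over CRI-O's default socket — the socket path and error handling are illustrative, and this covers only the image half of the flow:

	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Default CRI-O socket; other runtimes use other paths.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		img := runtimeapi.NewImageServiceClient(conn)
		ref := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}

		st, err := img.ImageStatus(context.TODO(), &runtimeapi.ImageStatusRequest{Image: ref})
		if err != nil {
			panic(err)
		}
		if st.Image == nil { // the "Image ... not found" line in the log above
			if _, err := img.PullImage(context.TODO(),
				&runtimeapi.PullImageRequest{Image: ref}); err != nil {
				panic(err)
			}
			fmt.Println("pulled", ref.Image)
		}
	}
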
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	27b70ccf2a2bc       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          6 seconds ago        Running             busybox                                  0                   4df2b4b18d117       busybox                                     default
	19aef1ec8510c       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          16 seconds ago       Running             csi-snapshotter                          0                   76ed6035570f7       csi-hostpathplugin-86x7s                    kube-system
	61a5097a66804       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          17 seconds ago       Running             csi-provisioner                          0                   76ed6035570f7       csi-hostpathplugin-86x7s                    kube-system
	621ed44d4d0c9       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            18 seconds ago       Running             liveness-probe                           0                   76ed6035570f7       csi-hostpathplugin-86x7s                    kube-system
	423188941aea4       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             19 seconds ago       Running             controller                               0                   5f8435b6e04f2       ingress-nginx-controller-675c5ddd98-5ppwr   ingress-nginx
	441d937b8068c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 23 seconds ago       Running             gcp-auth                                 0                   323c55def826a       gcp-auth-78565c9fb4-5728j                   gcp-auth
	0957c0a36894a       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           24 seconds ago       Running             hostpath                                 0                   76ed6035570f7       csi-hostpathplugin-86x7s                    kube-system
	066ff52c2ddcd       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            25 seconds ago       Running             gadget                                   0                   4eb2ecaed9e87       gadget-kzxfz                                gadget
	3552d128c67c5       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                27 seconds ago       Running             node-driver-registrar                    0                   76ed6035570f7       csi-hostpathplugin-86x7s                    kube-system
	97c4cd86f30ed       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             28 seconds ago       Exited              patch                                    2                   abda503e132df       ingress-nginx-admission-patch-tpf9p         ingress-nginx
	e0688bdc55e0b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              28 seconds ago       Running             registry-proxy                           0                   e7362f18db413       registry-proxy-cs2k2                        kube-system
	0f54646dd806e       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     29 seconds ago       Running             nvidia-device-plugin-ctr                 0                   c4c36c0bc4659       nvidia-device-plugin-daemonset-24shm        kube-system
	83682e4a110f1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   37 seconds ago       Running             csi-external-health-monitor-controller   0                   76ed6035570f7       csi-hostpathplugin-86x7s                    kube-system
	7d5469c58bfc4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   38 seconds ago       Exited              patch                                    0                   1dcb5dbb13339       gcp-auth-certs-patch-nt254                  gcp-auth
	ea6861a45ac70       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     39 seconds ago       Running             amd-gpu-device-plugin                    0                   c6d4e2f783cad       amd-gpu-device-plugin-cpl45                 kube-system
	0314c0bc382ed       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      40 seconds ago       Running             volume-snapshot-controller               0                   00d15442e9fe3       snapshot-controller-7d9fbc56b8-d9lzl        kube-system
	8362d34d3550e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   40 seconds ago       Exited              create                                   0                   f3bf9fde8769c       ingress-nginx-admission-create-6rf28        ingress-nginx
	12266be6b9ab3       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      40 seconds ago       Running             volume-snapshot-controller               0                   9ef34a2a027ac       snapshot-controller-7d9fbc56b8-wrh9q        kube-system
	ad7812dcdc930       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   42 seconds ago       Exited              create                                   0                   b15e028fe99b3       gcp-auth-certs-create-9pg6v                 gcp-auth
	7c8dc6d14b139       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             42 seconds ago       Running             csi-attacher                             0                   7976607f84d97       csi-hostpath-attacher-0                     kube-system
	e712266799f11       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           43 seconds ago       Running             registry                                 0                   f1316c3452f72       registry-6b586f9694-ds6k9                   kube-system
	c3bf40d60ab5e       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              44 seconds ago       Running             csi-resizer                              0                   2cae445dde2d5       csi-hostpath-resizer-0                      kube-system
	b63192b7f745f       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              46 seconds ago       Running             yakd                                     0                   2c72fc205123b       yakd-dashboard-5ff678cb9-dn24s              yakd-dashboard
	c19ddca298d1e       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             49 seconds ago       Running             local-path-provisioner                   0                   1bf1c34fc4541       local-path-provisioner-648f6765c9-zlb8q     local-path-storage
	1c530a50ccecc       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               50 seconds ago       Running             cloud-spanner-emulator                   0                   dfaf3d25c7f4b       cloud-spanner-emulator-86bd5cbb97-xfwfj     default
	db7c2a98e81df       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               52 seconds ago       Running             minikube-ingress-dns                     0                   52ca272c9227c       kube-ingress-dns-minikube                   kube-system
	9bd2912e692dc       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        57 seconds ago       Running             metrics-server                           0                   7b5aa0bab6500       metrics-server-85b7d694d7-g2nwm             kube-system
	ea11dd25ee99e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             58 seconds ago       Running             coredns                                  0                   b9bf05c027e23       coredns-66bc5c9577-58kmh                    kube-system
	6ec65c531ce9b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             58 seconds ago       Running             storage-provisioner                      0                   7e2edd03c74dd       storage-provisioner                         kube-system
	4f25f66b4cedf       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   a6c25e9b56e3a       kube-proxy-2f7sr                            kube-system
	a0eba15d448be       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   84e022be55df3       kindnet-qskcd                               kube-system
	c2b16514601ac       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             About a minute ago   Running             kube-controller-manager                  0                   b6986a1a2b4b0       kube-controller-manager-addons-459729       kube-system
	102e7dda91245       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             About a minute ago   Running             kube-scheduler                           0                   79e5b59eeb1c5       kube-scheduler-addons-459729                kube-system
	4150a83c0db93       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             About a minute ago   Running             etcd                                     0                   d6e35f5ca53c8       etcd-addons-459729                          kube-system
	7a9a679c5c891       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             About a minute ago   Running             kube-apiserver                           0                   d283821e23e4a       kube-apiserver-addons-459729                kube-system
	
	
	==> coredns [ea11dd25ee99edc9b27421bacea724bf74b1fec81e1f33251d8241d538f0bd7b] <==
	[INFO] 10.244.0.17:60965 - 33256 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003365004s
	[INFO] 10.244.0.17:43132 - 21927 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000083987s
	[INFO] 10.244.0.17:43132 - 21631 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000122886s
	[INFO] 10.244.0.17:55984 - 62108 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000085792s
	[INFO] 10.244.0.17:55984 - 62274 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000172297s
	[INFO] 10.244.0.17:59534 - 46029 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000085485s
	[INFO] 10.244.0.17:59534 - 45635 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000130065s
	[INFO] 10.244.0.17:35492 - 64690 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000118967s
	[INFO] 10.244.0.17:35492 - 64268 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000152403s
	[INFO] 10.244.0.21:54006 - 22748 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000194069s
	[INFO] 10.244.0.21:45352 - 54900 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00026904s
	[INFO] 10.244.0.21:38334 - 25222 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129109s
	[INFO] 10.244.0.21:34539 - 64672 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000226506s
	[INFO] 10.244.0.21:59972 - 30687 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125292s
	[INFO] 10.244.0.21:34145 - 41111 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000153861s
	[INFO] 10.244.0.21:52994 - 11684 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003138228s
	[INFO] 10.244.0.21:36916 - 32432 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.004561076s
	[INFO] 10.244.0.21:50024 - 33145 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.003880565s
	[INFO] 10.244.0.21:48825 - 39484 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.0061693s
	[INFO] 10.244.0.21:56944 - 27445 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004052333s
	[INFO] 10.244.0.21:39046 - 54424 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005945025s
	[INFO] 10.244.0.21:51579 - 13184 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004308646s
	[INFO] 10.244.0.21:39799 - 50681 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005368105s
	[INFO] 10.244.0.21:57974 - 51048 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001082611s
	[INFO] 10.244.0.21:51671 - 13280 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001179932s
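
Editor's note: the NXDOMAIN bursts above are ordinary Kubernetes DNS search-path expansion, not failures. With the cluster default `ndots:5`, a name like `storage.googleapis.com` (two dots, fewer than five) is first tried against every `search` suffix from the pod's resolv.conf, and only then as an absolute name — the final two NOERROR lines are that absolute query succeeding. A minimal sketch of the expansion; the search list below is an assumption inferred from the suffixes visible in the log, and the relative order of A/AAAA pairs in the real log varies:

```go
package main

import (
	"fmt"
	"strings"
)

// candidates reproduces resolver search-list expansion: if the name has
// fewer than ndots dots, every search suffix is tried first and the
// absolute name is tried last.
func candidates(name string, search []string, ndots int) []string {
	var out []string
	if strings.Count(name, ".") < ndots {
		for _, s := range search {
			out = append(out, name+"."+s+".")
		}
	}
	return append(out, name+".")
}

func main() {
	// Assumed search list, inferred from the query suffixes in the
	// CoreDNS log above (a gcp-auth pod on this GCE-hosted node).
	search := []string{
		"gcp-auth.svc.cluster.local",
		"svc.cluster.local",
		"cluster.local",
		"local",
		"us-central1-a.c.k8s-minikube.internal",
		"c.k8s-minikube.internal",
		"google.internal",
	}
	for _, q := range candidates("storage.googleapis.com", search, 5) {
		fmt.Println(q) // each candidate corresponds to one query line above
	}
}
```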
	
	
	==> describe nodes <==
	Name:               addons-459729
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-459729
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=addons-459729
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T14_14_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-459729
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-459729"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 14:14:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-459729
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 14:16:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 14:16:28 +0000   Sun, 26 Oct 2025 14:14:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 14:16:28 +0000   Sun, 26 Oct 2025 14:14:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 14:16:28 +0000   Sun, 26 Oct 2025 14:14:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 14:16:28 +0000   Sun, 26 Oct 2025 14:15:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-459729
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                f0596a61-354d-402e-9406-4163a5db7e7d
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  default                     cloud-spanner-emulator-86bd5cbb97-xfwfj      0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  gadget                      gadget-kzxfz                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  gcp-auth                    gcp-auth-78565c9fb4-5728j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-5ppwr    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         99s
	  kube-system                 amd-gpu-device-plugin-cpl45                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 coredns-66bc5c9577-58kmh                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     101s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 csi-hostpathplugin-86x7s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 etcd-addons-459729                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         107s
	  kube-system                 kindnet-qskcd                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      101s
	  kube-system                 kube-apiserver-addons-459729                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-controller-manager-addons-459729        200m (2%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-proxy-2f7sr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-addons-459729                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 metrics-server-85b7d694d7-g2nwm              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         99s
	  kube-system                 nvidia-device-plugin-daemonset-24shm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 registry-6b586f9694-ds6k9                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 registry-creds-764b6fb674-dk4lc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 registry-proxy-cs2k2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 snapshot-controller-7d9fbc56b8-d9lzl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 snapshot-controller-7d9fbc56b8-wrh9q         0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  local-path-storage          local-path-provisioner-648f6765c9-zlb8q      0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-dn24s               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     99s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 99s   kube-proxy       
	  Normal  Starting                 106s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  106s  kubelet          Node addons-459729 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s  kubelet          Node addons-459729 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s  kubelet          Node addons-459729 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           102s  node-controller  Node addons-459729 event: Registered Node addons-459729 in Controller
	  Normal  NodeReady                59s   kubelet          Node addons-459729 status is now: NodeReady
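
Editor's note: the block above is `kubectl describe node addons-459729` output; the Conditions table is what the pause/addon tests assert against. For reference, a minimal client-go sketch that reads the same conditions programmatically — it assumes a reachable cluster and a kubeconfig at the default path:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes ~/.kube/config points at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-459729", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Prints the same rows as the Conditions table rendered above.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}
```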
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	
	
	==> etcd [4150a83c0db93bd824ae7492cd5bbd3cd5b925dc5e29702692a93bb4ebe91e4a] <==
	{"level":"warn","ts":"2025-10-26T14:15:04.383906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:04.391461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:30.725132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:30.731839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:30.749203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:15:30.756191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:16:06.673503Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.362269ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T14:16:06.673597Z","caller":"traceutil/trace.go:172","msg":"trace[1407356744] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1094; }","duration":"162.491663ms","start":"2025-10-26T14:16:06.511089Z","end":"2025-10-26T14:16:06.673580Z","steps":["trace[1407356744] 'agreement among raft nodes before linearized reading'  (duration: 44.429446ms)","trace[1407356744] 'range keys from in-memory index tree'  (duration: 117.89894ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T14:16:06.675114Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.362867ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040893471723429 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-nt254\" mod_revision:1091 > success:<request_put:<key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-nt254\" value_size:4081 >> failure:<request_range:<key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-nt254\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-26T14:16:06.675395Z","caller":"traceutil/trace.go:172","msg":"trace[1217944354] linearizableReadLoop","detail":"{readStateIndex:1125; appliedIndex:1124; }","duration":"119.89607ms","start":"2025-10-26T14:16:06.555484Z","end":"2025-10-26T14:16:06.675380Z","steps":["trace[1217944354] 'read index received'  (duration: 18.538µs)","trace[1217944354] 'applied index is now lower than readState.Index'  (duration: 119.876207ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T14:16:06.675424Z","caller":"traceutil/trace.go:172","msg":"trace[470063816] transaction","detail":"{read_only:false; response_revision:1095; number_of_response:1; }","duration":"196.815195ms","start":"2025-10-26T14:16:06.478586Z","end":"2025-10-26T14:16:06.675401Z","steps":["trace[470063816] 'process raft request'  (duration: 76.965623ms)","trace[470063816] 'compare'  (duration: 117.805435ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T14:16:06.675523Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"164.37334ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T14:16:06.675739Z","caller":"traceutil/trace.go:172","msg":"trace[1813938213] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1095; }","duration":"164.587149ms","start":"2025-10-26T14:16:06.511134Z","end":"2025-10-26T14:16:06.675722Z","steps":["trace[1813938213] 'agreement among raft nodes before linearized reading'  (duration: 164.337405ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T14:16:06.839498Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.209135ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-10-26T14:16:06.839705Z","caller":"traceutil/trace.go:172","msg":"trace[1627999609] transaction","detail":"{read_only:false; response_revision:1096; number_of_response:1; }","duration":"157.511557ms","start":"2025-10-26T14:16:06.682155Z","end":"2025-10-26T14:16:06.839666Z","steps":["trace[1627999609] 'process raft request'  (duration: 113.355136ms)","trace[1627999609] 'compare'  (duration: 43.875174ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T14:16:06.839935Z","caller":"traceutil/trace.go:172","msg":"trace[222252180] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1095; }","duration":"134.546756ms","start":"2025-10-26T14:16:06.705111Z","end":"2025-10-26T14:16:06.839657Z","steps":["trace[222252180] 'agreement among raft nodes before linearized reading'  (duration: 90.30554ms)","trace[222252180] 'range keys from in-memory index tree'  (duration: 43.778269ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T14:16:10.550138Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.904128ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040893471723491 > lease_revoke:<id:70cc9a20df3b0e67>","response":"size:29"}
	{"level":"info","ts":"2025-10-26T14:16:10.550263Z","caller":"traceutil/trace.go:172","msg":"trace[486118013] linearizableReadLoop","detail":"{readStateIndex:1137; appliedIndex:1136; }","duration":"110.54143ms","start":"2025-10-26T14:16:10.439705Z","end":"2025-10-26T14:16:10.550246Z","steps":["trace[486118013] 'read index received'  (duration: 38.875µs)","trace[486118013] 'applied index is now lower than readState.Index'  (duration: 110.501597ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T14:16:10.550396Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.681137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T14:16:10.550436Z","caller":"traceutil/trace.go:172","msg":"trace[1478287923] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1106; }","duration":"110.733039ms","start":"2025-10-26T14:16:10.439691Z","end":"2025-10-26T14:16:10.550424Z","steps":["trace[1478287923] 'agreement among raft nodes before linearized reading'  (duration: 110.638778ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T14:16:16.552003Z","caller":"traceutil/trace.go:172","msg":"trace[1516111140] linearizableReadLoop","detail":"{readStateIndex:1173; appliedIndex:1173; }","duration":"112.70591ms","start":"2025-10-26T14:16:16.439268Z","end":"2025-10-26T14:16:16.551974Z","steps":["trace[1516111140] 'read index received'  (duration: 112.69236ms)","trace[1516111140] 'applied index is now lower than readState.Index'  (duration: 11.711µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T14:16:16.552177Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.877721ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T14:16:16.552215Z","caller":"traceutil/trace.go:172","msg":"trace[102515432] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1141; }","duration":"112.949469ms","start":"2025-10-26T14:16:16.439258Z","end":"2025-10-26T14:16:16.552208Z","steps":["trace[102515432] 'agreement among raft nodes before linearized reading'  (duration: 112.841453ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T14:16:16.552194Z","caller":"traceutil/trace.go:172","msg":"trace[1700144795] transaction","detail":"{read_only:false; response_revision:1142; number_of_response:1; }","duration":"131.964726ms","start":"2025-10-26T14:16:16.420209Z","end":"2025-10-26T14:16:16.552174Z","steps":["trace[1700144795] 'process raft request'  (duration: 131.800273ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T14:16:16.701205Z","caller":"traceutil/trace.go:172","msg":"trace[808680941] transaction","detail":"{read_only:false; response_revision:1143; number_of_response:1; }","duration":"143.197143ms","start":"2025-10-26T14:16:16.557989Z","end":"2025-10-26T14:16:16.701187Z","steps":["trace[808680941] 'process raft request'  (duration: 143.039766ms)"],"step_count":1}
	
	
	==> gcp-auth [441d937b8068cc86fcb3a873cae9bcb6e3f4a3e79071a803935c38b3f14746aa] <==
	2025/10/26 14:16:19 GCP Auth Webhook started!
	2025/10/26 14:16:34 Ready to marshal response ...
	2025/10/26 14:16:34 Ready to write response ...
	2025/10/26 14:16:34 Ready to marshal response ...
	2025/10/26 14:16:34 Ready to write response ...
	2025/10/26 14:16:34 Ready to marshal response ...
	2025/10/26 14:16:34 Ready to write response ...
	
	
	==> kernel <==
	 14:16:42 up  1:59,  0 user,  load average: 1.72, 0.92, 1.32
	Linux addons-459729 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a0eba15d448bec4198d79695967a6f8e6718f30814fcdde9252cc843d58f1702] <==
	I1026 14:15:02.545559       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T14:15:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 14:15:02.856397       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 14:15:02.856470       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 14:15:02.856484       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 14:15:02.857350       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 14:15:32.855723       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 14:15:32.855783       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 14:15:32.856651       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1026 14:15:32.857955       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1026 14:15:34.557284       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 14:15:34.557328       1 metrics.go:72] Registering metrics
	I1026 14:15:34.557385       1 controller.go:711] "Syncing nftables rules"
	I1026 14:15:42.858844       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:15:42.858910       1 main.go:301] handling current node
	I1026 14:15:52.855249       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:15:52.855292       1 main.go:301] handling current node
	I1026 14:16:02.854871       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:16:02.854904       1 main.go:301] handling current node
	I1026 14:16:12.858243       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:16:12.858296       1 main.go:301] handling current node
	I1026 14:16:22.854387       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:16:22.854439       1 main.go:301] handling current node
	I1026 14:16:32.856519       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:16:32.856567       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7a9a679c5c891888d2fe6da11a5021a47a92d61386bbbc79c23ddd0de01e1321] <==
	E1026 14:15:47.285146       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.72.119:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.72.119:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.72.119:443: connect: connection refused" logger="UnhandledError"
	W1026 14:15:47.285402       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 14:15:47.285479       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1026 14:15:47.285903       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.72.119:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.72.119:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.72.119:443: connect: connection refused" logger="UnhandledError"
	W1026 14:15:48.288076       1 handler_proxy.go:99] no RequestInfo found in the context
	W1026 14:15:48.288110       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 14:15:48.288150       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 14:15:48.288194       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1026 14:15:48.288197       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 14:15:48.289343       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 14:15:52.296856       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 14:15:52.296916       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1026 14:15:52.297001       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.72.119:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.72.119:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1026 14:15:52.305409       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1026 14:16:40.620694       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43486: use of closed network connection
	E1026 14:16:40.776236       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43516: use of closed network connection
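
Editor's note: the repeated 503s above indicate the aggregated `v1beta1.metrics.k8s.io` APIService was briefly unavailable while metrics-server started; the apiserver reflects this in the APIService's `Available` condition, which is the quickest thing to poll when diagnosing these errors. A hedged sketch using the standard kube-aggregator clientset (cluster access via the default kubeconfig is assumed):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
	aggregator "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	ac, err := aggregator.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	svc, err := ac.ApiregistrationV1().APIServices().Get(
		context.Background(), "v1beta1.metrics.k8s.io", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// While the backing metrics-server returns 503s (as in the log above),
	// the apiserver holds this condition at Available=False.
	for _, c := range svc.Status.Conditions {
		if c.Type == apiregv1.Available {
			fmt.Printf("Available=%s reason=%s: %s\n", c.Status, c.Reason, c.Message)
		}
	}
}
```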
	
	
	==> kube-controller-manager [c2b16514601ac206983ecc827f418a7f7c9779b86a8ac77a095c139429ddb09c] <==
	I1026 14:15:00.709179       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 14:15:00.709318       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 14:15:00.709123       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 14:15:00.709809       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 14:15:00.711675       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:15:00.711687       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 14:15:00.713528       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 14:15:00.714754       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:15:00.716565       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 14:15:00.716650       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 14:15:00.716691       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 14:15:00.716697       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 14:15:00.716703       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 14:15:00.717979       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 14:15:00.723523       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-459729" podCIDRs=["10.244.0.0/24"]
	I1026 14:15:00.729033       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1026 14:15:03.029718       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1026 14:15:30.716451       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 14:15:30.716588       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1026 14:15:30.716640       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1026 14:15:30.737460       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1026 14:15:30.741504       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1026 14:15:30.817050       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:15:30.842433       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 14:15:45.647726       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4f25f66b4cedfe4a67445f7535bebe5278f7e84ec91c43ad9eee37d250277e78] <==
	I1026 14:15:02.702855       1 server_linux.go:53] "Using iptables proxy"
	I1026 14:15:02.996001       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 14:15:03.096217       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 14:15:03.096266       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 14:15:03.096360       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 14:15:03.183548       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 14:15:03.183613       1 server_linux.go:132] "Using iptables Proxier"
	I1026 14:15:03.194275       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 14:15:03.197537       1 server.go:527] "Version info" version="v1.34.1"
	I1026 14:15:03.197760       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 14:15:03.199843       1 config.go:200] "Starting service config controller"
	I1026 14:15:03.200789       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 14:15:03.200404       1 config.go:106] "Starting endpoint slice config controller"
	I1026 14:15:03.200979       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 14:15:03.200421       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 14:15:03.200995       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 14:15:03.201046       1 config.go:309] "Starting node config controller"
	I1026 14:15:03.201051       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 14:15:03.201056       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 14:15:03.301636       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 14:15:03.301650       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 14:15:03.301679       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [102e7dda912458a4fb7c5cf795d24e3f7f8111609a7f9f3d6aa2ac793be7d8ed] <==
	E1026 14:14:53.714707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 14:14:53.714721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 14:14:53.714925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 14:14:53.715056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 14:14:53.715252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 14:14:53.715267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:14:53.715339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 14:14:53.715410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 14:14:53.715473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 14:14:53.715543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 14:14:53.715570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 14:14:53.716376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 14:14:54.598092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 14:14:54.611465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 14:14:54.687609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 14:14:54.701877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 14:14:54.779666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 14:14:54.787848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1026 14:14:54.799124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:14:54.827579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 14:14:54.851711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 14:14:54.882786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 14:14:54.883667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 14:14:54.953839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1026 14:14:57.411028       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
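
Editor's note: the wall of "forbidden" errors above is a routine startup race, not a test failure: the scheduler's informers begin listing before its RBAC bindings have been reconciled, the reflectors retry, and the final line shows the caches syncing at 14:14:57. To check the same permission directly, a SelfSubjectAccessReview asks the apiserver whether the current identity can perform the exact verb/resource that was being denied — a minimal sketch, assuming a default kubeconfig:

```go
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Can the current identity list pods cluster-wide? This is the check
	// that kept failing for system:kube-scheduler in the log above.
	rev := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Resource: "pods",
			},
		},
	}
	out, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(
		context.Background(), rev, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", out.Status.Allowed, out.Status.Reason)
}
```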
	
	
	==> kubelet <==
	Oct 26 14:16:13 addons-459729 kubelet[1307]: I1026 14:16:13.374968    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-24shm" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:16:13 addons-459729 kubelet[1307]: I1026 14:16:13.390053    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-24shm" podStartSLOduration=1.7049941180000001 podStartE2EDuration="30.390037212s" podCreationTimestamp="2025-10-26 14:15:43 +0000 UTC" firstStartedPulling="2025-10-26 14:15:43.732453251 +0000 UTC m=+47.732643709" lastFinishedPulling="2025-10-26 14:16:12.417496352 +0000 UTC m=+76.417686803" observedRunningTime="2025-10-26 14:16:13.388240112 +0000 UTC m=+77.388430579" watchObservedRunningTime="2025-10-26 14:16:13.390037212 +0000 UTC m=+77.390227678"
	Oct 26 14:16:14 addons-459729 kubelet[1307]: I1026 14:16:14.086757    1307 scope.go:117] "RemoveContainer" containerID="1af90288ee6b33fd259b71329131383fca9bfac54ed12f7671866360c040e03a"
	Oct 26 14:16:14 addons-459729 kubelet[1307]: I1026 14:16:14.381985    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cs2k2" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:16:14 addons-459729 kubelet[1307]: I1026 14:16:14.385471    1307 scope.go:117] "RemoveContainer" containerID="1af90288ee6b33fd259b71329131383fca9bfac54ed12f7671866360c040e03a"
	Oct 26 14:16:14 addons-459729 kubelet[1307]: I1026 14:16:14.385583    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-24shm" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:16:14 addons-459729 kubelet[1307]: I1026 14:16:14.397853    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-cs2k2" podStartSLOduration=1.429714068 podStartE2EDuration="31.397824921s" podCreationTimestamp="2025-10-26 14:15:43 +0000 UTC" firstStartedPulling="2025-10-26 14:15:43.747285015 +0000 UTC m=+47.747475464" lastFinishedPulling="2025-10-26 14:16:13.715395869 +0000 UTC m=+77.715586317" observedRunningTime="2025-10-26 14:16:14.396100096 +0000 UTC m=+78.396290581" watchObservedRunningTime="2025-10-26 14:16:14.397824921 +0000 UTC m=+78.398015390"
	Oct 26 14:16:15 addons-459729 kubelet[1307]: E1026 14:16:15.260881    1307 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 26 14:16:15 addons-459729 kubelet[1307]: E1026 14:16:15.261002    1307 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11a2adc0-f603-426f-af30-919a48eee4bc-gcr-creds podName:11a2adc0-f603-426f-af30-919a48eee4bc nodeName:}" failed. No retries permitted until 2025-10-26 14:16:47.260974557 +0000 UTC m=+111.261165026 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/11a2adc0-f603-426f-af30-919a48eee4bc-gcr-creds") pod "registry-creds-764b6fb674-dk4lc" (UID: "11a2adc0-f603-426f-af30-919a48eee4bc") : secret "registry-creds-gcr" not found
	Oct 26 14:16:15 addons-459729 kubelet[1307]: I1026 14:16:15.396325    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cs2k2" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:16:15 addons-459729 kubelet[1307]: I1026 14:16:15.564305    1307 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jwvc\" (UniqueName: \"kubernetes.io/projected/aa9591b8-7508-4ad8-8460-012380f51924-kube-api-access-2jwvc\") pod \"aa9591b8-7508-4ad8-8460-012380f51924\" (UID: \"aa9591b8-7508-4ad8-8460-012380f51924\") "
	Oct 26 14:16:15 addons-459729 kubelet[1307]: I1026 14:16:15.567080    1307 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa9591b8-7508-4ad8-8460-012380f51924-kube-api-access-2jwvc" (OuterVolumeSpecName: "kube-api-access-2jwvc") pod "aa9591b8-7508-4ad8-8460-012380f51924" (UID: "aa9591b8-7508-4ad8-8460-012380f51924"). InnerVolumeSpecName "kube-api-access-2jwvc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 26 14:16:15 addons-459729 kubelet[1307]: I1026 14:16:15.665148    1307 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2jwvc\" (UniqueName: \"kubernetes.io/projected/aa9591b8-7508-4ad8-8460-012380f51924-kube-api-access-2jwvc\") on node \"addons-459729\" DevicePath \"\""
	Oct 26 14:16:16 addons-459729 kubelet[1307]: I1026 14:16:16.405388    1307 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abda503e132dfe1a6981f175cff912edc7283a15d7abe120be418645e6a37107"
	Oct 26 14:16:17 addons-459729 kubelet[1307]: I1026 14:16:17.424935    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-kzxfz" podStartSLOduration=68.189095587 podStartE2EDuration="1m14.424910788s" podCreationTimestamp="2025-10-26 14:15:03 +0000 UTC" firstStartedPulling="2025-10-26 14:16:10.918105692 +0000 UTC m=+74.918296138" lastFinishedPulling="2025-10-26 14:16:17.15392089 +0000 UTC m=+81.154111339" observedRunningTime="2025-10-26 14:16:17.424376469 +0000 UTC m=+81.424566936" watchObservedRunningTime="2025-10-26 14:16:17.424910788 +0000 UTC m=+81.425101256"
	Oct 26 14:16:19 addons-459729 kubelet[1307]: I1026 14:16:19.144974    1307 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 26 14:16:19 addons-459729 kubelet[1307]: I1026 14:16:19.145029    1307 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 26 14:16:19 addons-459729 kubelet[1307]: I1026 14:16:19.439981    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-5728j" podStartSLOduration=65.774217408 podStartE2EDuration="1m9.439955002s" podCreationTimestamp="2025-10-26 14:15:10 +0000 UTC" firstStartedPulling="2025-10-26 14:16:15.574108043 +0000 UTC m=+79.574298495" lastFinishedPulling="2025-10-26 14:16:19.239845627 +0000 UTC m=+83.240036089" observedRunningTime="2025-10-26 14:16:19.4392575 +0000 UTC m=+83.439447984" watchObservedRunningTime="2025-10-26 14:16:19.439955002 +0000 UTC m=+83.440145457"
	Oct 26 14:16:23 addons-459729 kubelet[1307]: I1026 14:16:23.460551    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-5ppwr" podStartSLOduration=73.004755912 podStartE2EDuration="1m20.460524738s" podCreationTimestamp="2025-10-26 14:15:03 +0000 UTC" firstStartedPulling="2025-10-26 14:16:15.622995668 +0000 UTC m=+79.623186127" lastFinishedPulling="2025-10-26 14:16:23.078764507 +0000 UTC m=+87.078954953" observedRunningTime="2025-10-26 14:16:23.459856996 +0000 UTC m=+87.460047479" watchObservedRunningTime="2025-10-26 14:16:23.460524738 +0000 UTC m=+87.460715205"
	Oct 26 14:16:26 addons-459729 kubelet[1307]: I1026 14:16:26.482319    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-86x7s" podStartSLOduration=1.021312807 podStartE2EDuration="43.482293047s" podCreationTimestamp="2025-10-26 14:15:43 +0000 UTC" firstStartedPulling="2025-10-26 14:15:43.730050858 +0000 UTC m=+47.730241319" lastFinishedPulling="2025-10-26 14:16:26.191031113 +0000 UTC m=+90.191221559" observedRunningTime="2025-10-26 14:16:26.481743225 +0000 UTC m=+90.481933702" watchObservedRunningTime="2025-10-26 14:16:26.482293047 +0000 UTC m=+90.482483515"
	Oct 26 14:16:34 addons-459729 kubelet[1307]: I1026 14:16:34.089041    1307 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25566aca-bb35-4fa2-a799-2a637fb86342" path="/var/lib/kubelet/pods/25566aca-bb35-4fa2-a799-2a637fb86342/volumes"
	Oct 26 14:16:34 addons-459729 kubelet[1307]: I1026 14:16:34.623348    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/34ab5631-8a88-449f-95bb-06d39c99c9a5-gcp-creds\") pod \"busybox\" (UID: \"34ab5631-8a88-449f-95bb-06d39c99c9a5\") " pod="default/busybox"
	Oct 26 14:16:34 addons-459729 kubelet[1307]: I1026 14:16:34.623520    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2hb8\" (UniqueName: \"kubernetes.io/projected/34ab5631-8a88-449f-95bb-06d39c99c9a5-kube-api-access-w2hb8\") pod \"busybox\" (UID: \"34ab5631-8a88-449f-95bb-06d39c99c9a5\") " pod="default/busybox"
	Oct 26 14:16:35 addons-459729 kubelet[1307]: I1026 14:16:35.521442    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.890497232 podStartE2EDuration="1.521417823s" podCreationTimestamp="2025-10-26 14:16:34 +0000 UTC" firstStartedPulling="2025-10-26 14:16:34.783033479 +0000 UTC m=+98.783223928" lastFinishedPulling="2025-10-26 14:16:35.41395407 +0000 UTC m=+99.414144519" observedRunningTime="2025-10-26 14:16:35.520376795 +0000 UTC m=+99.520567285" watchObservedRunningTime="2025-10-26 14:16:35.521417823 +0000 UTC m=+99.521608289"
	Oct 26 14:16:36 addons-459729 kubelet[1307]: I1026 14:16:36.088819    1307 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="974f1071-2121-4b37-aab5-9fe37362e16c" path="/var/lib/kubelet/pods/974f1071-2121-4b37-aab5-9fe37362e16c/volumes"
	
	
	==> storage-provisioner [6ec65c531ce9b20e7dfdb9cdb1623754497a4088bbed9f545ad3b0f28e423539] <==
	W1026 14:16:18.130503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:20.134544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:20.139879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:22.143989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:22.163696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:24.167676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:24.172359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:26.175577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:26.180017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:28.183144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:28.187295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:30.191065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:30.195551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:32.198598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:32.203842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:34.207503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:34.211470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:36.214790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:36.218835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:38.222022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:38.226945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:40.230744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:40.236884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:42.240570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:16:42.244701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
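The storage-provisioner log above consists of a single warning repeated roughly every two seconds: v1 Endpoints is deprecated in v1.33+ and discovery.k8s.io/v1 EndpointSlice should be used instead, most likely triggered by the provisioner's Endpoints-based leader-election polling. A minimal client-go sketch of the replacement API, assuming the default kubeconfig location and the kube-system namespace (both assumptions, not values from this run):

// Minimal sketch: list discovery.k8s.io/v1 EndpointSlices, the API the
// warning above recommends over the deprecated v1 Endpoints.
// Kubeconfig path and namespace are illustrative assumptions.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(
		context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
	}
}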
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-459729 -n addons-459729
helpers_test.go:269: (dbg) Run:  kubectl --context addons-459729 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-6rf28 ingress-nginx-admission-patch-tpf9p registry-creds-764b6fb674-dk4lc
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-459729 describe pod ingress-nginx-admission-create-6rf28 ingress-nginx-admission-patch-tpf9p registry-creds-764b6fb674-dk4lc
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-459729 describe pod ingress-nginx-admission-create-6rf28 ingress-nginx-admission-patch-tpf9p registry-creds-764b6fb674-dk4lc: exit status 1 (62.895901ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6rf28" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tpf9p" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-dk4lc" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-459729 describe pod ingress-nginx-admission-create-6rf28 ingress-nginx-admission-patch-tpf9p registry-creds-764b6fb674-dk4lc: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-459729 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-459729 addons disable headlamp --alsologtostderr -v=1: exit status 11 (256.772394ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 14:16:43.538282  855595 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:16:43.538571  855595 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:16:43.538582  855595 out.go:374] Setting ErrFile to fd 2...
	I1026 14:16:43.538587  855595 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:16:43.538826  855595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:16:43.539143  855595 mustload.go:65] Loading cluster: addons-459729
	I1026 14:16:43.539543  855595 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:43.539562  855595 addons.go:606] checking whether the cluster is paused
	I1026 14:16:43.539671  855595 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:43.539688  855595 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:16:43.540113  855595 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:16:43.558879  855595 ssh_runner.go:195] Run: systemctl --version
	I1026 14:16:43.558961  855595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:16:43.576925  855595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:16:43.678143  855595 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:16:43.678247  855595 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:16:43.708003  855595 cri.go:89] found id: "19aef1ec8510c14e849b7cefcdc09f57ad870ee7d19676222f9e11dadd8cc042"
	I1026 14:16:43.708031  855595 cri.go:89] found id: "61a5097a66804c567922e9da53afc210c2fdbb85ff910118e9760dee39f0d040"
	I1026 14:16:43.708035  855595 cri.go:89] found id: "621ed44d4d0c9c98dcc6f5d7791c964154a9fdfc066b031a81eea94bead4881f"
	I1026 14:16:43.708038  855595 cri.go:89] found id: "0957c0a36894ac64f64707cab794cc2ea3ec3052b89e5973d410bc3d470f0ccc"
	I1026 14:16:43.708041  855595 cri.go:89] found id: "3552d128c67c5f8bc101f8fec4ea4a567c8e554450e010cea9fff33e2fb35c57"
	I1026 14:16:43.708045  855595 cri.go:89] found id: "e0688bdc55e0b1428d713099dfcdead41642afc46111de5efa3f9e8fc577a82f"
	I1026 14:16:43.708048  855595 cri.go:89] found id: "0f54646dd806e6f1d2d2a55010ade3d07b7c4c78f14093b5ea24c778c704d8d9"
	I1026 14:16:43.708050  855595 cri.go:89] found id: "83682e4a110f1836b76b9ab37ae5bdb5165df03ddd6d4aab400697fb4757a66a"
	I1026 14:16:43.708053  855595 cri.go:89] found id: "ea6861a45ac70f5a40063121e871650cf8d06fbf282521746f2f1cec0f96e741"
	I1026 14:16:43.708064  855595 cri.go:89] found id: "0314c0bc382ed36965ef868e31dc0f76b6d82e34f43bf5a49c4799ecd426990c"
	I1026 14:16:43.708067  855595 cri.go:89] found id: "12266be6b9ab3bae1170a4813366b003d8d74419265ae8317f745310842b0eb6"
	I1026 14:16:43.708071  855595 cri.go:89] found id: "7c8dc6d14b139c980202322abce8e8be08218ec570fe222c54763e5032be2feb"
	I1026 14:16:43.708073  855595 cri.go:89] found id: "e712266799f113c6e29070d3b446eb814ab3d82a01e5503cf6d420bc5d9dd807"
	I1026 14:16:43.708076  855595 cri.go:89] found id: "c3bf40d60ab5e31a883ca325e0e0ec980516a554582873a5c7653558a6a05c25"
	I1026 14:16:43.708086  855595 cri.go:89] found id: "db7c2a98e81dfa3a84fa710f2fe409325e697b34c28852544eccec3493ba6c36"
	I1026 14:16:43.708091  855595 cri.go:89] found id: "9bd2912e692dc7dc8832b9f484bdfcb583e9e399f257d572d4fddb38842ac29a"
	I1026 14:16:43.708093  855595 cri.go:89] found id: "ea11dd25ee99edc9b27421bacea724bf74b1fec81e1f33251d8241d538f0bd7b"
	I1026 14:16:43.708101  855595 cri.go:89] found id: "6ec65c531ce9b20e7dfdb9cdb1623754497a4088bbed9f545ad3b0f28e423539"
	I1026 14:16:43.708104  855595 cri.go:89] found id: "4f25f66b4cedfe4a67445f7535bebe5278f7e84ec91c43ad9eee37d250277e78"
	I1026 14:16:43.708106  855595 cri.go:89] found id: "a0eba15d448bec4198d79695967a6f8e6718f30814fcdde9252cc843d58f1702"
	I1026 14:16:43.708108  855595 cri.go:89] found id: "c2b16514601ac206983ecc827f418a7f7c9779b86a8ac77a095c139429ddb09c"
	I1026 14:16:43.708111  855595 cri.go:89] found id: "102e7dda912458a4fb7c5cf795d24e3f7f8111609a7f9f3d6aa2ac793be7d8ed"
	I1026 14:16:43.708113  855595 cri.go:89] found id: "4150a83c0db93bd824ae7492cd5bbd3cd5b925dc5e29702692a93bb4ebe91e4a"
	I1026 14:16:43.708133  855595 cri.go:89] found id: "7a9a679c5c891888d2fe6da11a5021a47a92d61386bbbc79c23ddd0de01e1321"
	I1026 14:16:43.708138  855595 cri.go:89] found id: ""
	I1026 14:16:43.708203  855595 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:16:43.722789  855595 out.go:203] 
	W1026 14:16:43.724184  855595 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:16:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:16:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:16:43.724208  855595 out.go:285] * 
	* 
	W1026 14:16:43.728969  855595 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:16:43.730138  855595 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-459729 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.69s)
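Every addon-disable failure in this report has the same shape: the paused check lists kube-system containers through crictl successfully, then runs sudo runc list -f json, which fails because /run/runc does not exist on the node (a plausible consequence of the CRI-O runtime keeping its state elsewhere), and minikube surfaces the non-zero exit as MK_ADDON_DISABLE_PAUSED, i.e. exit status 11. A minimal local sketch of that probe and its failure mode, assuming runc on PATH and passwordless sudo:

// Sketch of the failing probe from the trace above: `sudo runc list -f json`.
// On a node with no /run/runc state directory this returns a non-zero exit,
// which the caller then reports instead of concluding "nothing is paused".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listRuncContainers() (string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// The report shows exactly this message:
		// open /run/runc: no such file or directory
		if strings.Contains(string(out), "no such file or directory") {
			return "", fmt.Errorf("runc state dir missing: %s", strings.TrimSpace(string(out)))
		}
		return "", err
	}
	return string(out), nil
}

func main() {
	out, err := listRuncContainers()
	if err != nil {
		fmt.Println("paused check would fail here:", err)
		return
	}
	fmt.Println(out)
}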

x
+
TestAddons/parallel/CloudSpanner (5.29s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-xfwfj" [3ca10933-55aa-40fe-900e-37a7cc59c07b] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003855881s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-459729 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-459729 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (276.397389ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 14:16:51.390584  856185 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:16:51.390895  856185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:16:51.390906  856185 out.go:374] Setting ErrFile to fd 2...
	I1026 14:16:51.390910  856185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:16:51.391121  856185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:16:51.391481  856185 mustload.go:65] Loading cluster: addons-459729
	I1026 14:16:51.391888  856185 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:51.391906  856185 addons.go:606] checking whether the cluster is paused
	I1026 14:16:51.392001  856185 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:51.392019  856185 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:16:51.392487  856185 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:16:51.413251  856185 ssh_runner.go:195] Run: systemctl --version
	I1026 14:16:51.413316  856185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:16:51.432651  856185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:16:51.541037  856185 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:16:51.541136  856185 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:16:51.578808  856185 cri.go:89] found id: "19aef1ec8510c14e849b7cefcdc09f57ad870ee7d19676222f9e11dadd8cc042"
	I1026 14:16:51.578851  856185 cri.go:89] found id: "61a5097a66804c567922e9da53afc210c2fdbb85ff910118e9760dee39f0d040"
	I1026 14:16:51.578855  856185 cri.go:89] found id: "621ed44d4d0c9c98dcc6f5d7791c964154a9fdfc066b031a81eea94bead4881f"
	I1026 14:16:51.578860  856185 cri.go:89] found id: "0957c0a36894ac64f64707cab794cc2ea3ec3052b89e5973d410bc3d470f0ccc"
	I1026 14:16:51.578862  856185 cri.go:89] found id: "3552d128c67c5f8bc101f8fec4ea4a567c8e554450e010cea9fff33e2fb35c57"
	I1026 14:16:51.578866  856185 cri.go:89] found id: "e0688bdc55e0b1428d713099dfcdead41642afc46111de5efa3f9e8fc577a82f"
	I1026 14:16:51.578868  856185 cri.go:89] found id: "0f54646dd806e6f1d2d2a55010ade3d07b7c4c78f14093b5ea24c778c704d8d9"
	I1026 14:16:51.578871  856185 cri.go:89] found id: "83682e4a110f1836b76b9ab37ae5bdb5165df03ddd6d4aab400697fb4757a66a"
	I1026 14:16:51.578873  856185 cri.go:89] found id: "ea6861a45ac70f5a40063121e871650cf8d06fbf282521746f2f1cec0f96e741"
	I1026 14:16:51.578883  856185 cri.go:89] found id: "0314c0bc382ed36965ef868e31dc0f76b6d82e34f43bf5a49c4799ecd426990c"
	I1026 14:16:51.578887  856185 cri.go:89] found id: "12266be6b9ab3bae1170a4813366b003d8d74419265ae8317f745310842b0eb6"
	I1026 14:16:51.578889  856185 cri.go:89] found id: "7c8dc6d14b139c980202322abce8e8be08218ec570fe222c54763e5032be2feb"
	I1026 14:16:51.578891  856185 cri.go:89] found id: "e712266799f113c6e29070d3b446eb814ab3d82a01e5503cf6d420bc5d9dd807"
	I1026 14:16:51.578894  856185 cri.go:89] found id: "c3bf40d60ab5e31a883ca325e0e0ec980516a554582873a5c7653558a6a05c25"
	I1026 14:16:51.578896  856185 cri.go:89] found id: "db7c2a98e81dfa3a84fa710f2fe409325e697b34c28852544eccec3493ba6c36"
	I1026 14:16:51.578911  856185 cri.go:89] found id: "9bd2912e692dc7dc8832b9f484bdfcb583e9e399f257d572d4fddb38842ac29a"
	I1026 14:16:51.578918  856185 cri.go:89] found id: "ea11dd25ee99edc9b27421bacea724bf74b1fec81e1f33251d8241d538f0bd7b"
	I1026 14:16:51.578922  856185 cri.go:89] found id: "6ec65c531ce9b20e7dfdb9cdb1623754497a4088bbed9f545ad3b0f28e423539"
	I1026 14:16:51.578925  856185 cri.go:89] found id: "4f25f66b4cedfe4a67445f7535bebe5278f7e84ec91c43ad9eee37d250277e78"
	I1026 14:16:51.578927  856185 cri.go:89] found id: "a0eba15d448bec4198d79695967a6f8e6718f30814fcdde9252cc843d58f1702"
	I1026 14:16:51.578930  856185 cri.go:89] found id: "c2b16514601ac206983ecc827f418a7f7c9779b86a8ac77a095c139429ddb09c"
	I1026 14:16:51.578932  856185 cri.go:89] found id: "102e7dda912458a4fb7c5cf795d24e3f7f8111609a7f9f3d6aa2ac793be7d8ed"
	I1026 14:16:51.578935  856185 cri.go:89] found id: "4150a83c0db93bd824ae7492cd5bbd3cd5b925dc5e29702692a93bb4ebe91e4a"
	I1026 14:16:51.578937  856185 cri.go:89] found id: "7a9a679c5c891888d2fe6da11a5021a47a92d61386bbbc79c23ddd0de01e1321"
	I1026 14:16:51.578939  856185 cri.go:89] found id: ""
	I1026 14:16:51.579006  856185 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:16:51.595046  856185 out.go:203] 
	W1026 14:16:51.596299  856185 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:16:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:16:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:16:51.596318  856185 out.go:285] * 
	* 
	W1026 14:16:51.602446  856185 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:16:51.603745  856185 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-459729 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.29s)
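Before each SSH probe, the trace shows the Docker driver resolving the node with docker container inspect addons-459729 --format={{.State.Status}}. A rough standalone equivalent of that status check; the profile name comes from this report and stands in for any profile:

// Sketch: query the minikube node container's state the way the trace does,
// via `docker container inspect --format={{.State.Status}}`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func nodeStatus(profile string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", profile,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	st, err := nodeStatus("addons-459729") // profile name from this report
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("node container state:", st) // e.g. "running"
}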

x
+
TestAddons/parallel/LocalPath (8.19s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-459729 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-459729 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-459729 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-459729 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-459729 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-459729 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-459729 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [c9e9e611-a9b2-4cdd-8838-9c8530043078] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [c9e9e611-a9b2-4cdd-8838-9c8530043078] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [c9e9e611-a9b2-4cdd-8838-9c8530043078] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003121104s
addons_test.go:967: (dbg) Run:  kubectl --context addons-459729 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-459729 ssh "cat /opt/local-path-provisioner/pvc-618f90bd-473d-4ea6-99a0-92fd8df748d0_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-459729 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-459729 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-459729 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-459729 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (277.227682ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 14:16:51.704590  856308 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:16:51.704945  856308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:16:51.704956  856308 out.go:374] Setting ErrFile to fd 2...
	I1026 14:16:51.704961  856308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:16:51.705257  856308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:16:51.705608  856308 mustload.go:65] Loading cluster: addons-459729
	I1026 14:16:51.706041  856308 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:51.706061  856308 addons.go:606] checking whether the cluster is paused
	I1026 14:16:51.706157  856308 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:51.706197  856308 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:16:51.706633  856308 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:16:51.724651  856308 ssh_runner.go:195] Run: systemctl --version
	I1026 14:16:51.724746  856308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:16:51.742926  856308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:16:51.844724  856308 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:16:51.844823  856308 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:16:51.881212  856308 cri.go:89] found id: "19aef1ec8510c14e849b7cefcdc09f57ad870ee7d19676222f9e11dadd8cc042"
	I1026 14:16:51.881239  856308 cri.go:89] found id: "61a5097a66804c567922e9da53afc210c2fdbb85ff910118e9760dee39f0d040"
	I1026 14:16:51.881252  856308 cri.go:89] found id: "621ed44d4d0c9c98dcc6f5d7791c964154a9fdfc066b031a81eea94bead4881f"
	I1026 14:16:51.881257  856308 cri.go:89] found id: "0957c0a36894ac64f64707cab794cc2ea3ec3052b89e5973d410bc3d470f0ccc"
	I1026 14:16:51.881262  856308 cri.go:89] found id: "3552d128c67c5f8bc101f8fec4ea4a567c8e554450e010cea9fff33e2fb35c57"
	I1026 14:16:51.881268  856308 cri.go:89] found id: "e0688bdc55e0b1428d713099dfcdead41642afc46111de5efa3f9e8fc577a82f"
	I1026 14:16:51.881273  856308 cri.go:89] found id: "0f54646dd806e6f1d2d2a55010ade3d07b7c4c78f14093b5ea24c778c704d8d9"
	I1026 14:16:51.881277  856308 cri.go:89] found id: "83682e4a110f1836b76b9ab37ae5bdb5165df03ddd6d4aab400697fb4757a66a"
	I1026 14:16:51.881281  856308 cri.go:89] found id: "ea6861a45ac70f5a40063121e871650cf8d06fbf282521746f2f1cec0f96e741"
	I1026 14:16:51.881299  856308 cri.go:89] found id: "0314c0bc382ed36965ef868e31dc0f76b6d82e34f43bf5a49c4799ecd426990c"
	I1026 14:16:51.881308  856308 cri.go:89] found id: "12266be6b9ab3bae1170a4813366b003d8d74419265ae8317f745310842b0eb6"
	I1026 14:16:51.881312  856308 cri.go:89] found id: "7c8dc6d14b139c980202322abce8e8be08218ec570fe222c54763e5032be2feb"
	I1026 14:16:51.881316  856308 cri.go:89] found id: "e712266799f113c6e29070d3b446eb814ab3d82a01e5503cf6d420bc5d9dd807"
	I1026 14:16:51.881321  856308 cri.go:89] found id: "c3bf40d60ab5e31a883ca325e0e0ec980516a554582873a5c7653558a6a05c25"
	I1026 14:16:51.881325  856308 cri.go:89] found id: "db7c2a98e81dfa3a84fa710f2fe409325e697b34c28852544eccec3493ba6c36"
	I1026 14:16:51.881331  856308 cri.go:89] found id: "9bd2912e692dc7dc8832b9f484bdfcb583e9e399f257d572d4fddb38842ac29a"
	I1026 14:16:51.881340  856308 cri.go:89] found id: "ea11dd25ee99edc9b27421bacea724bf74b1fec81e1f33251d8241d538f0bd7b"
	I1026 14:16:51.881347  856308 cri.go:89] found id: "6ec65c531ce9b20e7dfdb9cdb1623754497a4088bbed9f545ad3b0f28e423539"
	I1026 14:16:51.881351  856308 cri.go:89] found id: "4f25f66b4cedfe4a67445f7535bebe5278f7e84ec91c43ad9eee37d250277e78"
	I1026 14:16:51.881355  856308 cri.go:89] found id: "a0eba15d448bec4198d79695967a6f8e6718f30814fcdde9252cc843d58f1702"
	I1026 14:16:51.881359  856308 cri.go:89] found id: "c2b16514601ac206983ecc827f418a7f7c9779b86a8ac77a095c139429ddb09c"
	I1026 14:16:51.881363  856308 cri.go:89] found id: "102e7dda912458a4fb7c5cf795d24e3f7f8111609a7f9f3d6aa2ac793be7d8ed"
	I1026 14:16:51.881367  856308 cri.go:89] found id: "4150a83c0db93bd824ae7492cd5bbd3cd5b925dc5e29702692a93bb4ebe91e4a"
	I1026 14:16:51.881371  856308 cri.go:89] found id: "7a9a679c5c891888d2fe6da11a5021a47a92d61386bbbc79c23ddd0de01e1321"
	I1026 14:16:51.881375  856308 cri.go:89] found id: ""
	I1026 14:16:51.881424  856308 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:16:51.902355  856308 out.go:203] 
	W1026 14:16:51.903951  856308 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:16:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:16:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:16:51.904017  856308 out.go:285] * 
	* 
	W1026 14:16:51.910668  856308 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:16:51.912405  856308 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-459729 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.19s)
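The part of this test that passes is the helper loop at helpers_test.go:402, which polls kubectl get pvc test-pvc -o jsonpath={.status.phase} until the claim leaves Pending. A rough standalone equivalent of that loop; context name, namespace, and the five-minute deadline mirror the test's parameters, while the two-second poll interval is an assumption:

// Sketch of the PVC phase-polling loop the helper performs via kubectl.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func pvcPhase(kubectx, ns, name string) (string, error) {
	out, err := exec.Command("kubectl", "--context", kubectx, "get", "pvc", name,
		"-n", ns, "-o", "jsonpath={.status.phase}").Output()
	return string(out), err
}

func main() {
	deadline := time.Now().Add(5 * time.Minute) // the test waits 5m0s
	for time.Now().Before(deadline) {
		phase, err := pvcPhase("addons-459729", "default", "test-pvc")
		if err == nil && phase == "Bound" {
			fmt.Println("pvc bound")
			return
		}
		time.Sleep(2 * time.Second) // poll interval: an assumption
	}
	fmt.Println("timed out waiting for pvc")
}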

x
+
TestAddons/parallel/NvidiaDevicePlugin (5.28s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-24shm" [1bb55f2d-872c-4696-aac2-64ab714c33e4] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00396467s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-459729 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-459729 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (274.801078ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 14:16:46.110699  855773 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:16:46.111039  855773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:16:46.111051  855773 out.go:374] Setting ErrFile to fd 2...
	I1026 14:16:46.111056  855773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:16:46.111322  855773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:16:46.111718  855773 mustload.go:65] Loading cluster: addons-459729
	I1026 14:16:46.112224  855773 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:46.112247  855773 addons.go:606] checking whether the cluster is paused
	I1026 14:16:46.112361  855773 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:46.112383  855773 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:16:46.112836  855773 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:16:46.135856  855773 ssh_runner.go:195] Run: systemctl --version
	I1026 14:16:46.135946  855773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:16:46.155022  855773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:16:46.260020  855773 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:16:46.260184  855773 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:16:46.292276  855773 cri.go:89] found id: "19aef1ec8510c14e849b7cefcdc09f57ad870ee7d19676222f9e11dadd8cc042"
	I1026 14:16:46.292312  855773 cri.go:89] found id: "61a5097a66804c567922e9da53afc210c2fdbb85ff910118e9760dee39f0d040"
	I1026 14:16:46.292317  855773 cri.go:89] found id: "621ed44d4d0c9c98dcc6f5d7791c964154a9fdfc066b031a81eea94bead4881f"
	I1026 14:16:46.292320  855773 cri.go:89] found id: "0957c0a36894ac64f64707cab794cc2ea3ec3052b89e5973d410bc3d470f0ccc"
	I1026 14:16:46.292322  855773 cri.go:89] found id: "3552d128c67c5f8bc101f8fec4ea4a567c8e554450e010cea9fff33e2fb35c57"
	I1026 14:16:46.292330  855773 cri.go:89] found id: "e0688bdc55e0b1428d713099dfcdead41642afc46111de5efa3f9e8fc577a82f"
	I1026 14:16:46.292333  855773 cri.go:89] found id: "0f54646dd806e6f1d2d2a55010ade3d07b7c4c78f14093b5ea24c778c704d8d9"
	I1026 14:16:46.292336  855773 cri.go:89] found id: "83682e4a110f1836b76b9ab37ae5bdb5165df03ddd6d4aab400697fb4757a66a"
	I1026 14:16:46.292338  855773 cri.go:89] found id: "ea6861a45ac70f5a40063121e871650cf8d06fbf282521746f2f1cec0f96e741"
	I1026 14:16:46.292352  855773 cri.go:89] found id: "0314c0bc382ed36965ef868e31dc0f76b6d82e34f43bf5a49c4799ecd426990c"
	I1026 14:16:46.292355  855773 cri.go:89] found id: "12266be6b9ab3bae1170a4813366b003d8d74419265ae8317f745310842b0eb6"
	I1026 14:16:46.292358  855773 cri.go:89] found id: "7c8dc6d14b139c980202322abce8e8be08218ec570fe222c54763e5032be2feb"
	I1026 14:16:46.292361  855773 cri.go:89] found id: "e712266799f113c6e29070d3b446eb814ab3d82a01e5503cf6d420bc5d9dd807"
	I1026 14:16:46.292364  855773 cri.go:89] found id: "c3bf40d60ab5e31a883ca325e0e0ec980516a554582873a5c7653558a6a05c25"
	I1026 14:16:46.292366  855773 cri.go:89] found id: "db7c2a98e81dfa3a84fa710f2fe409325e697b34c28852544eccec3493ba6c36"
	I1026 14:16:46.292373  855773 cri.go:89] found id: "9bd2912e692dc7dc8832b9f484bdfcb583e9e399f257d572d4fddb38842ac29a"
	I1026 14:16:46.292378  855773 cri.go:89] found id: "ea11dd25ee99edc9b27421bacea724bf74b1fec81e1f33251d8241d538f0bd7b"
	I1026 14:16:46.292382  855773 cri.go:89] found id: "6ec65c531ce9b20e7dfdb9cdb1623754497a4088bbed9f545ad3b0f28e423539"
	I1026 14:16:46.292385  855773 cri.go:89] found id: "4f25f66b4cedfe4a67445f7535bebe5278f7e84ec91c43ad9eee37d250277e78"
	I1026 14:16:46.292387  855773 cri.go:89] found id: "a0eba15d448bec4198d79695967a6f8e6718f30814fcdde9252cc843d58f1702"
	I1026 14:16:46.292390  855773 cri.go:89] found id: "c2b16514601ac206983ecc827f418a7f7c9779b86a8ac77a095c139429ddb09c"
	I1026 14:16:46.292392  855773 cri.go:89] found id: "102e7dda912458a4fb7c5cf795d24e3f7f8111609a7f9f3d6aa2ac793be7d8ed"
	I1026 14:16:46.292394  855773 cri.go:89] found id: "4150a83c0db93bd824ae7492cd5bbd3cd5b925dc5e29702692a93bb4ebe91e4a"
	I1026 14:16:46.292396  855773 cri.go:89] found id: "7a9a679c5c891888d2fe6da11a5021a47a92d61386bbbc79c23ddd0de01e1321"
	I1026 14:16:46.292399  855773 cri.go:89] found id: ""
	I1026 14:16:46.292446  855773 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:16:46.307332  855773 out.go:203] 
	W1026 14:16:46.308358  855773 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:16:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:16:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:16:46.308377  855773 out.go:285] * 
	* 
	W1026 14:16:46.313527  855773 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:16:46.318201  855773 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-459729 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.28s)
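The readiness wait that passes here (pods matching name=nvidia-device-plugin-ds in kube-system) can also be expressed directly against the API rather than through the test helpers. A hedged client-go sketch, assuming the default kubeconfig; selector and namespace are the ones in the test output:

// Sketch: wait until every pod matching the label selector is Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func allRunning(cs *kubernetes.Clientset, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(context.Background(),
		metav1.ListOptions{LabelSelector: selector})
	if err != nil || len(pods.Items) == 0 {
		return false, err
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for i := 0; i < 72; i++ { // ~6m at 5s, matching the test's 6m0s budget
		if ok, _ := allRunning(cs, "kube-system", "name=nvidia-device-plugin-ds"); ok {
			fmt.Println("device plugin pods running")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out")
}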

x
+
TestAddons/parallel/Yakd (6.26s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-dn24s" [f62a223c-0f83-436c-beb9-89d61469f560] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003166115s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-459729 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-459729 addons disable yakd --alsologtostderr -v=1: exit status 11 (259.398642ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 14:17:00.133250  857582 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:17:00.133498  857582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:17:00.133507  857582 out.go:374] Setting ErrFile to fd 2...
	I1026 14:17:00.133511  857582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:17:00.133712  857582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:17:00.134023  857582 mustload.go:65] Loading cluster: addons-459729
	I1026 14:17:00.134443  857582 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:17:00.134464  857582 addons.go:606] checking whether the cluster is paused
	I1026 14:17:00.134554  857582 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:17:00.134572  857582 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:17:00.134981  857582 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:17:00.153549  857582 ssh_runner.go:195] Run: systemctl --version
	I1026 14:17:00.153620  857582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:17:00.171465  857582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:17:00.272310  857582 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:17:00.272388  857582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:17:00.302561  857582 cri.go:89] found id: "19aef1ec8510c14e849b7cefcdc09f57ad870ee7d19676222f9e11dadd8cc042"
	I1026 14:17:00.302589  857582 cri.go:89] found id: "61a5097a66804c567922e9da53afc210c2fdbb85ff910118e9760dee39f0d040"
	I1026 14:17:00.302594  857582 cri.go:89] found id: "621ed44d4d0c9c98dcc6f5d7791c964154a9fdfc066b031a81eea94bead4881f"
	I1026 14:17:00.302599  857582 cri.go:89] found id: "0957c0a36894ac64f64707cab794cc2ea3ec3052b89e5973d410bc3d470f0ccc"
	I1026 14:17:00.302603  857582 cri.go:89] found id: "3552d128c67c5f8bc101f8fec4ea4a567c8e554450e010cea9fff33e2fb35c57"
	I1026 14:17:00.302607  857582 cri.go:89] found id: "e0688bdc55e0b1428d713099dfcdead41642afc46111de5efa3f9e8fc577a82f"
	I1026 14:17:00.302611  857582 cri.go:89] found id: "0f54646dd806e6f1d2d2a55010ade3d07b7c4c78f14093b5ea24c778c704d8d9"
	I1026 14:17:00.302615  857582 cri.go:89] found id: "83682e4a110f1836b76b9ab37ae5bdb5165df03ddd6d4aab400697fb4757a66a"
	I1026 14:17:00.302619  857582 cri.go:89] found id: "ea6861a45ac70f5a40063121e871650cf8d06fbf282521746f2f1cec0f96e741"
	I1026 14:17:00.302626  857582 cri.go:89] found id: "0314c0bc382ed36965ef868e31dc0f76b6d82e34f43bf5a49c4799ecd426990c"
	I1026 14:17:00.302631  857582 cri.go:89] found id: "12266be6b9ab3bae1170a4813366b003d8d74419265ae8317f745310842b0eb6"
	I1026 14:17:00.302634  857582 cri.go:89] found id: "7c8dc6d14b139c980202322abce8e8be08218ec570fe222c54763e5032be2feb"
	I1026 14:17:00.302639  857582 cri.go:89] found id: "e712266799f113c6e29070d3b446eb814ab3d82a01e5503cf6d420bc5d9dd807"
	I1026 14:17:00.302643  857582 cri.go:89] found id: "c3bf40d60ab5e31a883ca325e0e0ec980516a554582873a5c7653558a6a05c25"
	I1026 14:17:00.302648  857582 cri.go:89] found id: "db7c2a98e81dfa3a84fa710f2fe409325e697b34c28852544eccec3493ba6c36"
	I1026 14:17:00.302666  857582 cri.go:89] found id: "9bd2912e692dc7dc8832b9f484bdfcb583e9e399f257d572d4fddb38842ac29a"
	I1026 14:17:00.302674  857582 cri.go:89] found id: "ea11dd25ee99edc9b27421bacea724bf74b1fec81e1f33251d8241d538f0bd7b"
	I1026 14:17:00.302680  857582 cri.go:89] found id: "6ec65c531ce9b20e7dfdb9cdb1623754497a4088bbed9f545ad3b0f28e423539"
	I1026 14:17:00.302684  857582 cri.go:89] found id: "4f25f66b4cedfe4a67445f7535bebe5278f7e84ec91c43ad9eee37d250277e78"
	I1026 14:17:00.302687  857582 cri.go:89] found id: "a0eba15d448bec4198d79695967a6f8e6718f30814fcdde9252cc843d58f1702"
	I1026 14:17:00.302691  857582 cri.go:89] found id: "c2b16514601ac206983ecc827f418a7f7c9779b86a8ac77a095c139429ddb09c"
	I1026 14:17:00.302696  857582 cri.go:89] found id: "102e7dda912458a4fb7c5cf795d24e3f7f8111609a7f9f3d6aa2ac793be7d8ed"
	I1026 14:17:00.302702  857582 cri.go:89] found id: "4150a83c0db93bd824ae7492cd5bbd3cd5b925dc5e29702692a93bb4ebe91e4a"
	I1026 14:17:00.302707  857582 cri.go:89] found id: "7a9a679c5c891888d2fe6da11a5021a47a92d61386bbbc79c23ddd0de01e1321"
	I1026 14:17:00.302712  857582 cri.go:89] found id: ""
	I1026 14:17:00.302762  857582 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:17:00.318339  857582 out.go:203] 
	W1026 14:17:00.320486  857582 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:17:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:17:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:17:00.320514  857582 out.go:285] * 
	* 
	W1026 14:17:00.326274  857582 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:17:00.327526  857582 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-459729 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.26s)

x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.27s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-cpl45" [3361dd34-f7d4-4824-b347-6f718134c1bc] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003373715s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-459729 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-459729 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (265.178858ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 14:16:57.543483  857438 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:16:57.543758  857438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:16:57.543767  857438 out.go:374] Setting ErrFile to fd 2...
	I1026 14:16:57.543771  857438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:16:57.543990  857438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:16:57.544288  857438 mustload.go:65] Loading cluster: addons-459729
	I1026 14:16:57.544662  857438 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:57.544677  857438 addons.go:606] checking whether the cluster is paused
	I1026 14:16:57.544758  857438 config.go:182] Loaded profile config "addons-459729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:57.544775  857438 host.go:66] Checking if "addons-459729" exists ...
	I1026 14:16:57.545188  857438 cli_runner.go:164] Run: docker container inspect addons-459729 --format={{.State.Status}}
	I1026 14:16:57.563538  857438 ssh_runner.go:195] Run: systemctl --version
	I1026 14:16:57.563595  857438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-459729
	I1026 14:16:57.582489  857438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33536 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/addons-459729/id_rsa Username:docker}
	I1026 14:16:57.683097  857438 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:16:57.683205  857438 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:16:57.715413  857438 cri.go:89] found id: "19aef1ec8510c14e849b7cefcdc09f57ad870ee7d19676222f9e11dadd8cc042"
	I1026 14:16:57.715434  857438 cri.go:89] found id: "61a5097a66804c567922e9da53afc210c2fdbb85ff910118e9760dee39f0d040"
	I1026 14:16:57.715438  857438 cri.go:89] found id: "621ed44d4d0c9c98dcc6f5d7791c964154a9fdfc066b031a81eea94bead4881f"
	I1026 14:16:57.715441  857438 cri.go:89] found id: "0957c0a36894ac64f64707cab794cc2ea3ec3052b89e5973d410bc3d470f0ccc"
	I1026 14:16:57.715444  857438 cri.go:89] found id: "3552d128c67c5f8bc101f8fec4ea4a567c8e554450e010cea9fff33e2fb35c57"
	I1026 14:16:57.715447  857438 cri.go:89] found id: "e0688bdc55e0b1428d713099dfcdead41642afc46111de5efa3f9e8fc577a82f"
	I1026 14:16:57.715450  857438 cri.go:89] found id: "0f54646dd806e6f1d2d2a55010ade3d07b7c4c78f14093b5ea24c778c704d8d9"
	I1026 14:16:57.715452  857438 cri.go:89] found id: "83682e4a110f1836b76b9ab37ae5bdb5165df03ddd6d4aab400697fb4757a66a"
	I1026 14:16:57.715455  857438 cri.go:89] found id: "ea6861a45ac70f5a40063121e871650cf8d06fbf282521746f2f1cec0f96e741"
	I1026 14:16:57.715470  857438 cri.go:89] found id: "0314c0bc382ed36965ef868e31dc0f76b6d82e34f43bf5a49c4799ecd426990c"
	I1026 14:16:57.715473  857438 cri.go:89] found id: "12266be6b9ab3bae1170a4813366b003d8d74419265ae8317f745310842b0eb6"
	I1026 14:16:57.715475  857438 cri.go:89] found id: "7c8dc6d14b139c980202322abce8e8be08218ec570fe222c54763e5032be2feb"
	I1026 14:16:57.715478  857438 cri.go:89] found id: "e712266799f113c6e29070d3b446eb814ab3d82a01e5503cf6d420bc5d9dd807"
	I1026 14:16:57.715480  857438 cri.go:89] found id: "c3bf40d60ab5e31a883ca325e0e0ec980516a554582873a5c7653558a6a05c25"
	I1026 14:16:57.715483  857438 cri.go:89] found id: "db7c2a98e81dfa3a84fa710f2fe409325e697b34c28852544eccec3493ba6c36"
	I1026 14:16:57.715487  857438 cri.go:89] found id: "9bd2912e692dc7dc8832b9f484bdfcb583e9e399f257d572d4fddb38842ac29a"
	I1026 14:16:57.715489  857438 cri.go:89] found id: "ea11dd25ee99edc9b27421bacea724bf74b1fec81e1f33251d8241d538f0bd7b"
	I1026 14:16:57.715494  857438 cri.go:89] found id: "6ec65c531ce9b20e7dfdb9cdb1623754497a4088bbed9f545ad3b0f28e423539"
	I1026 14:16:57.715496  857438 cri.go:89] found id: "4f25f66b4cedfe4a67445f7535bebe5278f7e84ec91c43ad9eee37d250277e78"
	I1026 14:16:57.715498  857438 cri.go:89] found id: "a0eba15d448bec4198d79695967a6f8e6718f30814fcdde9252cc843d58f1702"
	I1026 14:16:57.715501  857438 cri.go:89] found id: "c2b16514601ac206983ecc827f418a7f7c9779b86a8ac77a095c139429ddb09c"
	I1026 14:16:57.715503  857438 cri.go:89] found id: "102e7dda912458a4fb7c5cf795d24e3f7f8111609a7f9f3d6aa2ac793be7d8ed"
	I1026 14:16:57.715506  857438 cri.go:89] found id: "4150a83c0db93bd824ae7492cd5bbd3cd5b925dc5e29702692a93bb4ebe91e4a"
	I1026 14:16:57.715508  857438 cri.go:89] found id: "7a9a679c5c891888d2fe6da11a5021a47a92d61386bbbc79c23ddd0de01e1321"
	I1026 14:16:57.715511  857438 cri.go:89] found id: ""
	I1026 14:16:57.715549  857438 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 14:16:57.732577  857438 out.go:203] 
	W1026 14:16:57.734054  857438 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:16:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:16:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 14:16:57.734080  857438 out.go:285] * 
	* 
	W1026 14:16:57.741766  857438 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 14:16:57.743483  857438 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-459729 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.27s)
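Note: this failure (and the Yakd, NvidiaDevicePlugin, CloudSpanner, and other addon-disable failures above) exits with MK_ADDON_DISABLE_PAUSED because, before disabling anything, minikube checks whether the cluster is paused by shelling out to `sudo runc list -f json`; on this crio node the /run/runc state directory does not exist, so the check itself fails with exit status 1. Below is a minimal, hypothetical Go sketch of a more tolerant paused-container check, assuming a missing runc state directory can safely be read as "no paused containers" (the helper names are illustrative, not minikube's actual code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// runcContainer mirrors the subset of `runc list -f json` output we need.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused returns the IDs of paused runc containers. Assumption: a
// "no such file or directory" error from runc (missing /run/runc state
// dir, as in the log above) means nothing is paused, not a hard failure.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "no such file or directory") {
			return nil, nil // runc has no state here; treat as unpaused
		}
		return nil, fmt.Errorf("runc list: %w: %s", err, out)
	}
	s := strings.TrimSpace(string(out))
	var cs []runcContainer
	if s != "" && s != "null" { // runc prints "null" when the list is empty
		if err := json.Unmarshal([]byte(s), &cs); err != nil {
			return nil, fmt.Errorf("unparseable runc list output: %w", err)
		}
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	paused, err := listPaused()
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Printf("%d paused container(s)\n", len(paused))
}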

x
+
TestFunctional/parallel/DashboardCmd (302.32s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd


=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-656017 --alsologtostderr -v=1]
E1026 14:31:34.525187  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:32:02.231521  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-656017 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-656017 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-656017 --alsologtostderr -v=1] stderr:
I1026 14:29:41.928507  884928 out.go:360] Setting OutFile to fd 1 ...
I1026 14:29:41.928774  884928 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:29:41.928785  884928 out.go:374] Setting ErrFile to fd 2...
I1026 14:29:41.928789  884928 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:29:41.928997  884928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
I1026 14:29:41.929313  884928 mustload.go:65] Loading cluster: functional-656017
I1026 14:29:41.929638  884928 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:29:41.930052  884928 cli_runner.go:164] Run: docker container inspect functional-656017 --format={{.State.Status}}
I1026 14:29:41.947819  884928 host.go:66] Checking if "functional-656017" exists ...
I1026 14:29:41.948124  884928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1026 14:29:42.002205  884928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-26 14:29:41.992757955 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1026 14:29:42.002337  884928 api_server.go:166] Checking apiserver status ...
I1026 14:29:42.002385  884928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1026 14:29:42.002420  884928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-656017
I1026 14:29:42.020174  884928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33546 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/functional-656017/id_rsa Username:docker}
I1026 14:29:42.124375  884928 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4229/cgroup
W1026 14:29:42.132871  884928 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4229/cgroup: Process exited with status 1
stdout:

stderr:
I1026 14:29:42.132937  884928 ssh_runner.go:195] Run: ls
I1026 14:29:42.136813  884928 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1026 14:29:42.141033  884928 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1026 14:29:42.141079  884928 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1026 14:29:42.141245  884928 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:29:42.141257  884928 addons.go:69] Setting dashboard=true in profile "functional-656017"
I1026 14:29:42.141264  884928 addons.go:238] Setting addon dashboard=true in "functional-656017"
I1026 14:29:42.141289  884928 host.go:66] Checking if "functional-656017" exists ...
I1026 14:29:42.141594  884928 cli_runner.go:164] Run: docker container inspect functional-656017 --format={{.State.Status}}
I1026 14:29:42.161393  884928 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1026 14:29:42.162823  884928 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1026 14:29:42.164083  884928 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1026 14:29:42.164121  884928 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1026 14:29:42.164210  884928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-656017
I1026 14:29:42.182316  884928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33546 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/functional-656017/id_rsa Username:docker}
I1026 14:29:42.288082  884928 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1026 14:29:42.288117  884928 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1026 14:29:42.300841  884928 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1026 14:29:42.300867  884928 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1026 14:29:42.314180  884928 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1026 14:29:42.314206  884928 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1026 14:29:42.327027  884928 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1026 14:29:42.327048  884928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1026 14:29:42.340864  884928 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I1026 14:29:42.340905  884928 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1026 14:29:42.354134  884928 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1026 14:29:42.354158  884928 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1026 14:29:42.367542  884928 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1026 14:29:42.367575  884928 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1026 14:29:42.380747  884928 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1026 14:29:42.380774  884928 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1026 14:29:42.393631  884928 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1026 14:29:42.393657  884928 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1026 14:29:42.406230  884928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1026 14:29:42.835706  884928 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-656017 addons enable metrics-server

I1026 14:29:42.837117  884928 addons.go:201] Writing out "functional-656017" config to set dashboard=true...
W1026 14:29:42.837382  884928 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1026 14:29:42.838027  884928 kapi.go:59] client config for functional-656017: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt", KeyFile:"/home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.key", CAFile:"/home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c6a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1026 14:29:42.838483  884928 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1026 14:29:42.838498  884928 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1026 14:29:42.838509  884928 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1026 14:29:42.838517  884928 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1026 14:29:42.838521  884928 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1026 14:29:42.846193  884928 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  5c41ff19-4398-473e-a595-3bc23266cddc 797 0 2025-10-26 14:29:42 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-26 14:29:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.108.251.128,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.108.251.128],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1026 14:29:42.846352  884928 out.go:285] * Launching proxy ...
* Launching proxy ...
I1026 14:29:42.846417  884928 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-656017 proxy --port 36195]
I1026 14:29:42.846722  884928 dashboard.go:157] Waiting for kubectl to output host:port ...
I1026 14:29:42.889755  884928 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W1026 14:29:42.889818  884928 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1026 14:29:42.897755  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6c2b7e4f-9aa7-4873-85df-81dd61d48904] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:42 GMT]] Body:0xc00002f8c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b7180 TLS:<nil>}
I1026 14:29:42.897832  884928 retry.go:31] will retry after 66.889µs: Temporary Error: unexpected response code: 503
I1026 14:29:42.901315  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8a89e17c-a793-4a77-b2f0-072a747fbe97] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:42 GMT]] Body:0xc000b01280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000592dc0 TLS:<nil>}
I1026 14:29:42.901391  884928 retry.go:31] will retry after 209.398µs: Temporary Error: unexpected response code: 503
I1026 14:29:42.904640  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[29105ef3-080f-42ad-a7dd-ac6bd8bbf07e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:42 GMT]] Body:0xc000b01380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317a40 TLS:<nil>}
I1026 14:29:42.904691  884928 retry.go:31] will retry after 128.144µs: Temporary Error: unexpected response code: 503
I1026 14:29:42.908089  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2a1f163b-32ae-4333-9c5d-bc786e6f4209] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:42 GMT]] Body:0xc000b01440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317b80 TLS:<nil>}
I1026 14:29:42.908144  884928 retry.go:31] will retry after 205.861µs: Temporary Error: unexpected response code: 503
I1026 14:29:42.911474  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[298d9f06-b881-4eba-87c7-f5f1693d7475] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:42 GMT]] Body:0xc00002fa80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317cc0 TLS:<nil>}
I1026 14:29:42.911518  884928 retry.go:31] will retry after 724.068µs: Temporary Error: unexpected response code: 503
I1026 14:29:42.914636  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[faf7d76a-ef93-4ca8-9849-b589247ca5c2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:42 GMT]] Body:0xc00176ee00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000592f00 TLS:<nil>}
I1026 14:29:42.914690  884928 retry.go:31] will retry after 713.722µs: Temporary Error: unexpected response code: 503
I1026 14:29:42.919090  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dbe08d7f-f591-40c9-81ef-30480fe0dc0d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:42 GMT]] Body:0xc00002fb80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b72c0 TLS:<nil>}
I1026 14:29:42.919139  884928 retry.go:31] will retry after 1.072791ms: Temporary Error: unexpected response code: 503
I1026 14:29:42.923256  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[adfc23db-ecef-436e-af82-e4fbc265d0b0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:42 GMT]] Body:0xc000b01500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000593040 TLS:<nil>}
I1026 14:29:42.923296  884928 retry.go:31] will retry after 1.392271ms: Temporary Error: unexpected response code: 503
I1026 14:29:42.927668  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[98788ad9-a288-43f1-9fc5-574b9b1bcdfa] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:42 GMT]] Body:0xc00002fcc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000497040 TLS:<nil>}
I1026 14:29:42.927714  884928 retry.go:31] will retry after 3.615311ms: Temporary Error: unexpected response code: 503
I1026 14:29:42.934080  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f3d14f13-3ea4-49aa-bf0d-2774e2e90b3f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:42 GMT]] Body:0xc000b015c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000593180 TLS:<nil>}
I1026 14:29:42.934129  884928 retry.go:31] will retry after 5.708523ms: Temporary Error: unexpected response code: 503
I1026 14:29:42.942744  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0ce9d8a5-b929-483d-89c1-5b9a7587cc30] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:42 GMT]] Body:0xc00179a0c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000497180 TLS:<nil>}
I1026 14:29:42.942825  884928 retry.go:31] will retry after 3.039508ms: Temporary Error: unexpected response code: 503
I1026 14:29:42.948190  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a4cdfbd0-1d1a-4229-9018-6de2d94bae5b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:42 GMT]] Body:0xc000b016c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005932c0 TLS:<nil>}
I1026 14:29:42.948236  884928 retry.go:31] will retry after 8.200881ms: Temporary Error: unexpected response code: 503
I1026 14:29:42.959869  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c739f763-01ee-4b61-89f4-d3420b068365] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:42 GMT]] Body:0xc000b01780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004972c0 TLS:<nil>}
I1026 14:29:42.959924  884928 retry.go:31] will retry after 9.194764ms: Temporary Error: unexpected response code: 503
I1026 14:29:42.972660  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[91090c8b-8680-43d6-b801-9083041f7ef1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:42 GMT]] Body:0xc00176ef80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000497400 TLS:<nil>}
I1026 14:29:42.972727  884928 retry.go:31] will retry after 12.208307ms: Temporary Error: unexpected response code: 503
I1026 14:29:42.988760  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[282fe019-b3d1-472f-bcd4-3aed4119cb4a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:42 GMT]] Body:0xc00179a1c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b7540 TLS:<nil>}
I1026 14:29:42.988821  884928 retry.go:31] will retry after 17.832129ms: Temporary Error: unexpected response code: 503
I1026 14:29:43.009988  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[34145e32-2b09-49c1-8c30-a12db0a01abf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:43 GMT]] Body:0xc000b018c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000593400 TLS:<nil>}
I1026 14:29:43.010064  884928 retry.go:31] will retry after 47.377868ms: Temporary Error: unexpected response code: 503
I1026 14:29:43.060775  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[390882f8-2475-49eb-bce1-1cc994bd6b62] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:43 GMT]] Body:0xc00176f040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031a000 TLS:<nil>}
I1026 14:29:43.060835  884928 retry.go:31] will retry after 67.449909ms: Temporary Error: unexpected response code: 503
I1026 14:29:43.132327  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0e51ff19-94c2-47c7-91fd-816c7e4af405] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:43 GMT]] Body:0xc00176f100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b77c0 TLS:<nil>}
I1026 14:29:43.132413  884928 retry.go:31] will retry after 142.231323ms: Temporary Error: unexpected response code: 503
I1026 14:29:43.279047  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c3603a21-abbe-4b94-8f40-b88ea8c7abbd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:43 GMT]] Body:0xc00176f200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003be000 TLS:<nil>}
I1026 14:29:43.279122  884928 retry.go:31] will retry after 107.653131ms: Temporary Error: unexpected response code: 503
I1026 14:29:43.390591  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5ccd5e65-1e6a-49b9-8e90-f8f7222608db] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:43 GMT]] Body:0xc00179a2c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003be140 TLS:<nil>}
I1026 14:29:43.390654  884928 retry.go:31] will retry after 239.213636ms: Temporary Error: unexpected response code: 503
I1026 14:29:43.632875  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1c57fcfd-57a4-4327-a321-78ce6ef112d0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:43 GMT]] Body:0xc000b01a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000593540 TLS:<nil>}
I1026 14:29:43.632941  884928 retry.go:31] will retry after 355.452685ms: Temporary Error: unexpected response code: 503
I1026 14:29:43.992424  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2cf2f56b-ed14-41d7-9ca5-7256ecd8988d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:43 GMT]] Body:0xc000b01a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031a140 TLS:<nil>}
I1026 14:29:43.992488  884928 retry.go:31] will retry after 595.544697ms: Temporary Error: unexpected response code: 503
I1026 14:29:44.591210  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[661e0282-6e7e-41e9-a8ab-e303f7e17cb4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:44 GMT]] Body:0xc000864180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031a280 TLS:<nil>}
I1026 14:29:44.591291  884928 retry.go:31] will retry after 853.941486ms: Temporary Error: unexpected response code: 503
I1026 14:29:45.448365  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c4c71b96-1c6d-4005-9f87-37f61440673b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:45 GMT]] Body:0xc00176f380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001672000 TLS:<nil>}
I1026 14:29:45.448442  884928 retry.go:31] will retry after 1.090725664s: Temporary Error: unexpected response code: 503
I1026 14:29:46.543451  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[76bd135d-1012-4eaf-b54b-39154fd41b66] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:46 GMT]] Body:0xc000b01b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003be3c0 TLS:<nil>}
I1026 14:29:46.543527  884928 retry.go:31] will retry after 2.30127326s: Temporary Error: unexpected response code: 503
I1026 14:29:48.849452  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bdd87092-53d2-4684-a9ab-aa892874604c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:48 GMT]] Body:0xc00176f480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031a3c0 TLS:<nil>}
I1026 14:29:48.849516  884928 retry.go:31] will retry after 1.640119204s: Temporary Error: unexpected response code: 503
I1026 14:29:50.494355  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0ba682d2-1de9-4f3f-a0ac-de44ba589118] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:50 GMT]] Body:0xc0008642c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003be500 TLS:<nil>}
I1026 14:29:50.494421  884928 retry.go:31] will retry after 2.879741163s: Temporary Error: unexpected response code: 503
I1026 14:29:53.378091  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[51cb919b-4a9e-49a8-87ed-c5dcc2152013] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:53 GMT]] Body:0xc00176f540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001672140 TLS:<nil>}
I1026 14:29:53.378201  884928 retry.go:31] will retry after 6.330359258s: Temporary Error: unexpected response code: 503
I1026 14:29:59.713626  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1cecf854-eae5-4200-b9ac-c5bdc5bef28c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:29:59 GMT]] Body:0xc000864400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003be640 TLS:<nil>}
I1026 14:29:59.713704  884928 retry.go:31] will retry after 5.368138901s: Temporary Error: unexpected response code: 503
I1026 14:30:05.086845  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[19f02cb4-f6db-4852-b995-a0a94cc8c9fe] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:30:05 GMT]] Body:0xc000864480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003be780 TLS:<nil>}
I1026 14:30:05.086919  884928 retry.go:31] will retry after 15.480053352s: Temporary Error: unexpected response code: 503
I1026 14:30:20.573612  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4eb6ed31-cf6e-45aa-9e5e-8df3e255c129] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:30:20 GMT]] Body:0xc00176f680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001672280 TLS:<nil>}
I1026 14:30:20.573681  884928 retry.go:31] will retry after 16.202822264s: Temporary Error: unexpected response code: 503
I1026 14:30:36.780825  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ff9d7fca-c238-492b-b432-1f377f081d19] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:30:36 GMT]] Body:0xc000b01c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003be8c0 TLS:<nil>}
I1026 14:30:36.780904  884928 retry.go:31] will retry after 35.233325099s: Temporary Error: unexpected response code: 503
I1026 14:31:12.018299  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5fe6f08c-5259-4f48-b38f-927e203afc0b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:31:12 GMT]] Body:0xc00176f780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00031a500 TLS:<nil>}
I1026 14:31:12.018388  884928 retry.go:31] will retry after 50.21192573s: Temporary Error: unexpected response code: 503
I1026 14:32:02.234281  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6840c77a-1074-4d25-9da6-ab093191d8aa] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:32:02 GMT]] Body:0xc000b00300 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016723c0 TLS:<nil>}
I1026 14:32:02.234357  884928 retry.go:31] will retry after 1m26.334837362s: Temporary Error: unexpected response code: 503
I1026 14:33:28.574339  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8496e04e-7197-4130-a932-e4e11adcb3f2] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:33:28 GMT]] Body:0xc000b00480 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001672500 TLS:<nil>}
I1026 14:33:28.574435  884928 retry.go:31] will retry after 51.509365471s: Temporary Error: unexpected response code: 503
I1026 14:34:20.089710  884928 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8cd41d7b-c34f-42fe-a511-02cad8b33162] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sun, 26 Oct 2025 14:34:20 GMT]] Body:0xc000b00300 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001672640 TLS:<nil>}
I1026 14:34:20.089797  884928 retry.go:31] will retry after 57.536576618s: Temporary Error: unexpected response code: 503
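Note: the "will retry after ..." lines above come from the harness's retry helper backing off from microseconds up to roughly 1m26s while the dashboard proxy keeps answering 503, until the test's 5-minute budget is exhausted. Below is a minimal sketch of that capped, jittered exponential-backoff pattern; the constants and names are illustrative, not minikube's actual retry.go:

package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// probeUntilHealthy polls url until it returns 200 or the deadline passes,
// roughly doubling the wait (plus jitter) after each failed attempt.
func probeUntilHealthy(url string, deadline time.Duration) error {
	start := time.Now()
	wait := 100 * time.Microsecond // the log starts in the tens of microseconds
	for time.Since(start) < deadline {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("Temporary Error: unexpected response code: %d\n", resp.StatusCode)
		}
		// Double the delay, add up to 50% jitter, and cap a single sleep
		// at ~90s (the log above tops out near 1m26s).
		wait *= 2
		wait += time.Duration(rand.Int63n(int64(wait) / 2))
		if wait > 90*time.Second {
			wait = 90 * time.Second
		}
		fmt.Printf("will retry after %v\n", wait)
		time.Sleep(wait)
	}
	return fmt.Errorf("%s not healthy within %v", url, deadline)
}

func main() {
	if err := probeUntilHealthy("http://127.0.0.1:36195/", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}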
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-656017
helpers_test.go:243: (dbg) docker inspect functional-656017:

-- stdout --
	[
	    {
	        "Id": "7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51",
	        "Created": "2025-10-26T14:26:15.662564705Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 871791,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T14:26:15.695471555Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51/hostname",
	        "HostsPath": "/var/lib/docker/containers/7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51/hosts",
	        "LogPath": "/var/lib/docker/containers/7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51/7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51-json.log",
	        "Name": "/functional-656017",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-656017:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-656017",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51",
	                "LowerDir": "/var/lib/docker/overlay2/18941be66dc438b29a0d8f1b6cfb5e94b3f5364e7ff8ec834dc7e25ed24e4d78-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/18941be66dc438b29a0d8f1b6cfb5e94b3f5364e7ff8ec834dc7e25ed24e4d78/merged",
	                "UpperDir": "/var/lib/docker/overlay2/18941be66dc438b29a0d8f1b6cfb5e94b3f5364e7ff8ec834dc7e25ed24e4d78/diff",
	                "WorkDir": "/var/lib/docker/overlay2/18941be66dc438b29a0d8f1b6cfb5e94b3f5364e7ff8ec834dc7e25ed24e4d78/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-656017",
	                "Source": "/var/lib/docker/volumes/functional-656017/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-656017",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-656017",
	                "name.minikube.sigs.k8s.io": "functional-656017",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0491f8ee0884dffdfb60cf16586bdc089924ef954ce989676a59241184322961",
	            "SandboxKey": "/var/run/docker/netns/0491f8ee0884",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33546"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33547"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33550"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33548"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33549"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-656017": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:57:b7:c3:e1:66",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "530477eeed5a5cd1e2f2740c0bd7a64c9f8fbcffeceb135f9b5907f3c53af82d",
	                    "EndpointID": "0da32925992ecd2bb8901e4bfaa39ba5c2a59c288a0aa7b60b19bd3c9f7d4c8f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-656017",
	                        "7e6d295c9fb0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
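
The NetworkSettings.Ports map in the inspect output above is how the harness discovers the host-side ports published for the node container: 22/tcp (SSH) on 127.0.0.1:33546 and 8441/tcp (the API server) on 127.0.0.1:33549. As a minimal sketch, the same mapping can be read back with a docker inspect format query against the container shown above:

    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-656017
    # prints 33546, the host port forwarded to the node's SSH daemon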
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-656017 -n functional-656017
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-656017 logs -n 25: (1.314062424s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-656017 --kill=true                                                                                                                                │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │                     │
	│ start     │ -p functional-656017 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │                     │
	│ start     │ -p functional-656017 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │                     │
	│ start     │ -p functional-656017 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                 │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-656017 --alsologtostderr -v=1                                                                                                  │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │                     │
	│ ssh       │ functional-656017 ssh sudo systemctl is-active docker                                                                                                           │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │                     │
	│ ssh       │ functional-656017 ssh sudo systemctl is-active containerd                                                                                                       │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │                     │
	│ license   │                                                                                                                                                                 │ minikube          │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image     │ functional-656017 image load --daemon kicbase/echo-server:functional-656017 --alsologtostderr                                                                   │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image     │ functional-656017 image ls                                                                                                                                      │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image     │ functional-656017 image load --daemon kicbase/echo-server:functional-656017 --alsologtostderr                                                                   │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image     │ functional-656017 image ls                                                                                                                                      │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image     │ functional-656017 image load --daemon kicbase/echo-server:functional-656017 --alsologtostderr                                                                   │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image     │ functional-656017 image ls                                                                                                                                      │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image     │ functional-656017 image save kicbase/echo-server:functional-656017 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image     │ functional-656017 image rm kicbase/echo-server:functional-656017 --alsologtostderr                                                                              │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image     │ functional-656017 image ls                                                                                                                                      │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image     │ functional-656017 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image     │ functional-656017 image save --daemon kicbase/echo-server:functional-656017 --alsologtostderr                                                                   │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ ssh       │ functional-656017 ssh sudo cat /etc/ssl/certs/845095.pem                                                                                                        │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ ssh       │ functional-656017 ssh sudo cat /usr/share/ca-certificates/845095.pem                                                                                            │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ ssh       │ functional-656017 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ ssh       │ functional-656017 ssh sudo cat /etc/ssl/certs/8450952.pem                                                                                                       │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ ssh       │ functional-656017 ssh sudo cat /usr/share/ca-certificates/8450952.pem                                                                                           │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ ssh       │ functional-656017 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 14:29:41
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 14:29:41.706256  884792 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:29:41.706528  884792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:29:41.706536  884792 out.go:374] Setting ErrFile to fd 2...
	I1026 14:29:41.706540  884792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:29:41.706726  884792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:29:41.707221  884792 out.go:368] Setting JSON to false
	I1026 14:29:41.708137  884792 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7930,"bootTime":1761481052,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 14:29:41.708256  884792 start.go:141] virtualization: kvm guest
	I1026 14:29:41.710295  884792 out.go:179] * [functional-656017] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 14:29:41.711616  884792 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 14:29:41.711623  884792 notify.go:220] Checking for updates...
	I1026 14:29:41.713376  884792 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:29:41.714796  884792 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 14:29:41.716100  884792 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 14:29:41.717345  884792 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 14:29:41.718672  884792 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 14:29:41.720405  884792 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:29:41.720928  884792 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:29:41.745671  884792 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 14:29:41.745765  884792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:29:41.803208  884792 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-26 14:29:41.791510406 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 14:29:41.803325  884792 docker.go:318] overlay module found
	I1026 14:29:41.805202  884792 out.go:179] * Using the docker driver based on existing profile
	I1026 14:29:41.806275  884792 start.go:305] selected driver: docker
	I1026 14:29:41.806287  884792 start.go:925] validating driver "docker" against &{Name:functional-656017 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-656017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:29:41.806380  884792 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 14:29:41.806469  884792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:29:41.862329  884792 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-26 14:29:41.852410907 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 14:29:41.863025  884792 cni.go:84] Creating CNI manager for ""
	I1026 14:29:41.863097  884792 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 14:29:41.863156  884792 start.go:349] cluster config:
	{Name:functional-656017 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-656017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:29:41.864878  884792 out.go:179] * dry-run validation complete!
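	
	The single-line cluster config dumped above is the persisted profile for functional-656017. Assuming the standard minikube home layout (MINIKUBE_HOME is shown in the settings near the top of this start log), the same struct is stored as JSON on disk and can be inspected directly:
	
	  cat /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/config.json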
	
	
	==> CRI-O <==
	Oct 26 14:34:36 functional-656017 crio[3608]: time="2025-10-26T14:34:36.851554153Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-656017 found" id=d404d6c6-6c9d-4ab7-8291-b068bf0a1894 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:36 functional-656017 crio[3608]: time="2025-10-26T14:34:36.876357971Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-656017" id=fcda752f-cadb-4667-83ce-50f02bcb0576 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:36 functional-656017 crio[3608]: time="2025-10-26T14:34:36.876505826Z" level=info msg="Image localhost/kicbase/echo-server:functional-656017 not found" id=fcda752f-cadb-4667-83ce-50f02bcb0576 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:36 functional-656017 crio[3608]: time="2025-10-26T14:34:36.876537786Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-656017 found" id=fcda752f-cadb-4667-83ce-50f02bcb0576 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:37 functional-656017 crio[3608]: time="2025-10-26T14:34:37.722997889Z" level=info msg="Checking image status: kicbase/echo-server:functional-656017" id=b1186d4b-a22b-4942-977d-ccd497257077 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:37 functional-656017 crio[3608]: time="2025-10-26T14:34:37.749485591Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-656017" id=bc7b7478-75ca-4e28-8f8f-d75c58ccd053 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:37 functional-656017 crio[3608]: time="2025-10-26T14:34:37.749637463Z" level=info msg="Image docker.io/kicbase/echo-server:functional-656017 not found" id=bc7b7478-75ca-4e28-8f8f-d75c58ccd053 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:37 functional-656017 crio[3608]: time="2025-10-26T14:34:37.749671111Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-656017 found" id=bc7b7478-75ca-4e28-8f8f-d75c58ccd053 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:37 functional-656017 crio[3608]: time="2025-10-26T14:34:37.776685099Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-656017" id=d5c5f03c-dad5-49d3-9bd6-f7e34c040fe2 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:37 functional-656017 crio[3608]: time="2025-10-26T14:34:37.776853253Z" level=info msg="Image localhost/kicbase/echo-server:functional-656017 not found" id=d5c5f03c-dad5-49d3-9bd6-f7e34c040fe2 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:37 functional-656017 crio[3608]: time="2025-10-26T14:34:37.776905813Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-656017 found" id=d5c5f03c-dad5-49d3-9bd6-f7e34c040fe2 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:39 functional-656017 crio[3608]: time="2025-10-26T14:34:39.020044558Z" level=info msg="Checking image status: kicbase/echo-server:functional-656017" id=ecff2ca2-47d7-4c56-a374-aaa15f475e31 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:39 functional-656017 crio[3608]: time="2025-10-26T14:34:39.045960965Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-656017" id=0b758502-a5e9-43e4-a7b0-75317ff48fee name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:39 functional-656017 crio[3608]: time="2025-10-26T14:34:39.046085268Z" level=info msg="Image docker.io/kicbase/echo-server:functional-656017 not found" id=0b758502-a5e9-43e4-a7b0-75317ff48fee name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:39 functional-656017 crio[3608]: time="2025-10-26T14:34:39.046120674Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-656017 found" id=0b758502-a5e9-43e4-a7b0-75317ff48fee name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:39 functional-656017 crio[3608]: time="2025-10-26T14:34:39.072043094Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-656017" id=11f72a26-3ee8-49f4-8c55-e3c6760512cc name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:39 functional-656017 crio[3608]: time="2025-10-26T14:34:39.07218243Z" level=info msg="Image localhost/kicbase/echo-server:functional-656017 not found" id=11f72a26-3ee8-49f4-8c55-e3c6760512cc name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:39 functional-656017 crio[3608]: time="2025-10-26T14:34:39.072214021Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-656017 found" id=11f72a26-3ee8-49f4-8c55-e3c6760512cc name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:39 functional-656017 crio[3608]: time="2025-10-26T14:34:39.845907526Z" level=info msg="Checking image status: kicbase/echo-server:functional-656017" id=f3bee9a2-0107-4718-90f7-404976ae71de name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:39 functional-656017 crio[3608]: time="2025-10-26T14:34:39.871256423Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-656017" id=b9231f45-6f64-4227-b462-acb8d49be451 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:39 functional-656017 crio[3608]: time="2025-10-26T14:34:39.871385731Z" level=info msg="Image docker.io/kicbase/echo-server:functional-656017 not found" id=b9231f45-6f64-4227-b462-acb8d49be451 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:39 functional-656017 crio[3608]: time="2025-10-26T14:34:39.871418231Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-656017 found" id=b9231f45-6f64-4227-b462-acb8d49be451 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:39 functional-656017 crio[3608]: time="2025-10-26T14:34:39.897155185Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-656017" id=59722238-b8ee-4405-a1a8-def5770fc38f name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:39 functional-656017 crio[3608]: time="2025-10-26T14:34:39.897333786Z" level=info msg="Image localhost/kicbase/echo-server:functional-656017 not found" id=59722238-b8ee-4405-a1a8-def5770fc38f name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:39 functional-656017 crio[3608]: time="2025-10-26T14:34:39.897382532Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-656017 found" id=59722238-b8ee-4405-a1a8-def5770fc38f name=/runtime.v1.ImageService/ImageStatus
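	
	The ImageStatus entries above show CRI-O resolving the unqualified name kicbase/echo-server:functional-656017 against docker.io and localhost and finding neither, which is the expected trace after the image tests remove the tag. A quick way to confirm what the runtime actually holds (a sketch, assuming crictl is present on the node as in the kicbase image) is:
	
	  out/minikube-linux-amd64 -p functional-656017 ssh -- sudo crictl images | grep echo-server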
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d72feea374140       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   5 minutes ago       Exited              mount-munger              0                   321356887ae61       busybox-mount                               default
	ae1aa39570023       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e       6 minutes ago       Running             nginx                     0                   6f2dd0f292d4d       nginx-svc                                   default
	2db865dc7b069       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       2                   e1cf8e1f14fd9       storage-provisioner                         kube-system
	ca2758c3b0747       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      6 minutes ago       Running             kube-apiserver            0                   b8dc315566e53       kube-apiserver-functional-656017            kube-system
	9b1b7e8dd2367       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      6 minutes ago       Running             kube-controller-manager   1                   695d7ad4f4a81       kube-controller-manager-functional-656017   kube-system
	ff718389fd0d5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      6 minutes ago       Running             kube-scheduler            1                   ffea8b2f09a06       kube-scheduler-functional-656017            kube-system
	c376e39f3b52c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      6 minutes ago       Running             etcd                      1                   65ab1cbe95671       etcd-functional-656017                      kube-system
	dae3a7eefaef6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Exited              storage-provisioner       1                   e1cf8e1f14fd9       storage-provisioner                         kube-system
	f10fe1a825ece       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      7 minutes ago       Running             coredns                   1                   4b282a34c4d98       coredns-66bc5c9577-fvls7                    kube-system
	ac2a4f4184e61       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      7 minutes ago       Running             kube-proxy                1                   1366df45696e6       kube-proxy-lzmlr                            kube-system
	3a00b00b881c9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      7 minutes ago       Running             kindnet-cni               1                   0a3c75fa75530       kindnet-v9qhm                               kube-system
	21a7b04b3aa18       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      7 minutes ago       Exited              coredns                   0                   4b282a34c4d98       coredns-66bc5c9577-fvls7                    kube-system
	925d395db73b9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      8 minutes ago       Exited              kindnet-cni               0                   0a3c75fa75530       kindnet-v9qhm                               kube-system
	10f77c7ed3607       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      8 minutes ago       Exited              kube-proxy                0                   1366df45696e6       kube-proxy-lzmlr                            kube-system
	845e9b22f7c9c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      8 minutes ago       Exited              kube-controller-manager   0                   695d7ad4f4a81       kube-controller-manager-functional-656017   kube-system
	d2f21e90bdfb2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      8 minutes ago       Exited              kube-scheduler            0                   ffea8b2f09a06       kube-scheduler-functional-656017            kube-system
	de4a79c72b10d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      8 minutes ago       Exited              etcd                      0                   65ab1cbe95671       etcd-functional-656017                      kube-system
	
	
	==> coredns [21a7b04b3aa18668b05a7e74c0c854e868c19d467f7d6cb885dc923426d1175d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40376 - 33371 "HINFO IN 7412649040229934986.2398417335010100422. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.479800978s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f10fe1a825ece4e7e5704e2cc7128f0a02fac89b9c51059a6cdd22793ea14365] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52631 - 8649 "HINFO IN 7064989214614263487.4096664645323782067. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.119610551s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
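	
	The dial and TLS errors above all target 10.96.0.1:443, the in-cluster service VIP for the API server, and coincide with the kube-apiserver restart visible in the container status list above. A rough liveness probe of that VIP from the node (assuming kube-proxy's rules make it reachable there) would be:
	
	  out/minikube-linux-amd64 -p functional-656017 ssh -- curl -sk https://10.96.0.1:443/version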
	
	
	==> describe nodes <==
	Name:               functional-656017
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-656017
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=functional-656017
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T14_26_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 14:26:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-656017
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 14:34:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 14:34:41 +0000   Sun, 26 Oct 2025 14:26:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 14:34:41 +0000   Sun, 26 Oct 2025 14:26:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 14:34:41 +0000   Sun, 26 Oct 2025 14:26:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 14:34:41 +0000   Sun, 26 Oct 2025 14:26:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-656017
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                48b09da4-51f5-4aad-ba21-72df28aa14f3
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-cnh5r                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  default                     hello-node-connect-7d85dfc575-5l852           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  default                     mysql-5bb876957f-7nm86                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     1s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 coredns-66bc5c9577-fvls7                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m8s
	  kube-system                 etcd-functional-656017                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m13s
	  kube-system                 kindnet-v9qhm                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m8s
	  kube-system                 kube-apiserver-functional-656017              250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 kube-controller-manager-functional-656017     200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-proxy-lzmlr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-scheduler-functional-656017              100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-wbqc8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-94hj8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m6s                   kube-proxy       
	  Normal  Starting                 6m31s                  kube-proxy       
	  Normal  Starting                 8m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m13s                  kubelet          Node functional-656017 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m13s                  kubelet          Node functional-656017 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m13s                  kubelet          Node functional-656017 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m9s                   node-controller  Node functional-656017 event: Registered Node functional-656017 in Controller
	  Normal  NodeReady                7m57s                  kubelet          Node functional-656017 status is now: NodeReady
	  Normal  Starting                 7m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m11s (x8 over 7m11s)  kubelet          Node functional-656017 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m11s (x8 over 7m11s)  kubelet          Node functional-656017 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m11s (x8 over 7m11s)  kubelet          Node functional-656017 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m47s                  node-controller  Node functional-656017 event: Registered Node functional-656017 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
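	
	The "martian source" entries above are the kernel's reverse-path filter flagging pod-to-pod traffic on eth0 and are cosmetic here. Whether they get logged at all is governed by a sysctl, which can be checked from the node:
	
	  out/minikube-linux-amd64 -p functional-656017 ssh -- sysctl net.ipv4.conf.all.log_martians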
	
	
	==> etcd [c376e39f3b52c0c27a12c20a752c2867f4459d455be0af813e18bb55ac82d433] <==
	{"level":"warn","ts":"2025-10-26T14:27:53.013098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.026639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.032605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.039078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.045231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.052346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.058547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.065475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.072805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.080415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.093382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.107398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.114758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.121179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.128634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.135424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.143287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.150345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.156984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.164095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.183548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.187308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.193611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.200153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.250747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36940","server-name":"","error":"EOF"}
	
	
	==> etcd [de4a79c72b10d2e604ace86f200c4beb93e1fa32406f916e7886b8232029ecce] <==
	{"level":"warn","ts":"2025-10-26T14:26:27.278460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.284638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.291821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.307848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.314269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.320424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.363779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35742","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T14:27:30.562035Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-26T14:27:30.562147Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-656017","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-26T14:27:30.562265Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-26T14:27:30.563832Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-26T14:27:30.563898Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T14:27:30.563920Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-26T14:27:30.563978Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-26T14:27:30.564001Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-26T14:27:30.564001Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-26T14:27:30.564033Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-26T14:27:30.564087Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-26T14:27:30.564101Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-26T14:27:30.564070Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-26T14:27:30.564126Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T14:27:30.566120Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-26T14:27:30.566210Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T14:27:30.566245Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-26T14:27:30.566255Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-656017","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 14:34:43 up  2:17,  0 user,  load average: 0.29, 0.45, 0.68
	Linux functional-656017 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3a00b00b881c9d02ec005d4a24113c1b2ef56dd5b59b8f853a70bead7cfbbe7b] <==
	I1026 14:32:40.518508       1 main.go:301] handling current node
	I1026 14:32:50.515344       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:32:50.515381       1 main.go:301] handling current node
	I1026 14:33:00.513432       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:33:00.513502       1 main.go:301] handling current node
	I1026 14:33:10.512203       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:33:10.512242       1 main.go:301] handling current node
	I1026 14:33:20.512444       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:33:20.512489       1 main.go:301] handling current node
	I1026 14:33:30.511719       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:33:30.511773       1 main.go:301] handling current node
	I1026 14:33:40.511778       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:33:40.511821       1 main.go:301] handling current node
	I1026 14:33:50.518558       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:33:50.518594       1 main.go:301] handling current node
	I1026 14:34:00.512731       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:34:00.512767       1 main.go:301] handling current node
	I1026 14:34:10.512666       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:34:10.512708       1 main.go:301] handling current node
	I1026 14:34:20.511962       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:34:20.512005       1 main.go:301] handling current node
	I1026 14:34:30.512234       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:34:30.512272       1 main.go:301] handling current node
	I1026 14:34:40.518523       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:34:40.518565       1 main.go:301] handling current node
	
	
	==> kindnet [925d395db73b90f6f8405290b4a9c93369786816f1fce17a08dc90ee359443d4] <==
	I1026 14:26:36.186777       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 14:26:36.187051       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1026 14:26:36.215845       1 main.go:148] setting mtu 1500 for CNI 
	I1026 14:26:36.215876       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 14:26:36.215900       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T14:26:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 14:26:36.417338       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 14:26:36.417362       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 14:26:36.417377       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 14:26:36.417977       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 14:26:36.817566       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 14:26:36.817595       1 metrics.go:72] Registering metrics
	I1026 14:26:36.817681       1 controller.go:711] "Syncing nftables rules"
	I1026 14:26:46.417194       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:26:46.417266       1 main.go:301] handling current node
	I1026 14:26:56.417258       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:26:56.417297       1 main.go:301] handling current node
	I1026 14:27:06.417463       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:27:06.417500       1 main.go:301] handling current node
	I1026 14:27:16.417114       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:27:16.417154       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ca2758c3b0747b23b01630bbf07b4e70b74246e999a371f52426068264bb6eaa] <==
	I1026 14:27:53.701495       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1026 14:27:53.701772       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 14:27:53.702012       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 14:27:53.702062       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 14:27:53.706550       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 14:27:53.728989       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 14:27:53.735942       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 14:27:54.604728       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1026 14:27:54.811076       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1026 14:27:54.812437       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 14:27:54.817571       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 14:27:55.181493       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 14:27:55.270909       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 14:27:55.279925       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 14:27:55.342794       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 14:27:55.350188       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 14:27:57.331287       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 14:28:20.974427       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.174.82"}
	I1026 14:28:25.530781       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.150.87"}
	I1026 14:28:25.819878       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.185.91"}
	I1026 14:28:26.786816       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.80.146"}
	I1026 14:29:42.711630       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 14:29:42.815661       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.251.128"}
	I1026 14:29:42.828033       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.232.5"}
	I1026 14:34:42.920951       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.97.97.210"}
	
	
	==> kube-controller-manager [845e9b22f7c9cbbfd8966f1c14a659b863ea4d252c9783d731959d29e933f667] <==
	I1026 14:26:34.754690       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 14:26:34.754694       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 14:26:34.754715       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 14:26:34.754731       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 14:26:34.754876       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 14:26:34.754950       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 14:26:34.755053       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 14:26:34.755068       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 14:26:34.755106       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 14:26:34.755491       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 14:26:34.755512       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 14:26:34.755581       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 14:26:34.755590       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 14:26:34.758063       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 14:26:34.758145       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 14:26:34.758211       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 14:26:34.758222       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 14:26:34.758230       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 14:26:34.760332       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 14:26:34.763565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:26:34.769710       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-656017" podCIDRs=["10.244.0.0/24"]
	I1026 14:26:34.777273       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 14:26:34.780281       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 14:26:34.783551       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 14:26:49.706645       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [9b1b7e8dd23671d7230b504e22a6194c4a3dded87c9396dc7558bfcd19bfd0cd] <==
	I1026 14:27:57.021496       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 14:27:57.023787       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 14:27:57.026170       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 14:27:57.026242       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 14:27:57.026243       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 14:27:57.026443       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 14:27:57.026555       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 14:27:57.026577       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 14:27:57.027368       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 14:27:57.027394       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1026 14:27:57.027445       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 14:27:57.027452       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 14:27:57.027571       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 14:27:57.027828       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 14:27:57.028511       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 14:27:57.028539       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 14:27:57.028779       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 14:27:57.030794       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:27:57.042225       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1026 14:29:42.761746       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:29:42.767312       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:29:42.768859       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:29:42.770695       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:29:42.772025       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:29:42.777667       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [10f77c7ed3607a9f2e9e1b7386954e24cfa2bd9656b621d9976d0cb1df09d688] <==
	I1026 14:26:36.002635       1 server_linux.go:53] "Using iptables proxy"
	I1026 14:26:36.075108       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 14:26:36.175750       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 14:26:36.175810       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 14:26:36.175960       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 14:26:36.195046       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 14:26:36.195099       1 server_linux.go:132] "Using iptables Proxier"
	I1026 14:26:36.200612       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 14:26:36.201019       1 server.go:527] "Version info" version="v1.34.1"
	I1026 14:26:36.201037       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 14:26:36.202299       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 14:26:36.202325       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 14:26:36.202346       1 config.go:200] "Starting service config controller"
	I1026 14:26:36.202368       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 14:26:36.202676       1 config.go:309] "Starting node config controller"
	I1026 14:26:36.202774       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 14:26:36.202783       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 14:26:36.202358       1 config.go:106] "Starting endpoint slice config controller"
	I1026 14:26:36.203225       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 14:26:36.302515       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 14:26:36.303744       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 14:26:36.303782       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [ac2a4f4184e61b6f3e212173f5fd287a266fe9854e7e6d92cc6cc308487c717e] <==
	E1026 14:27:20.271682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-656017&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:27:21.330105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-656017&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:27:23.056459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-656017&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:27:28.667874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-656017&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:27:50.258783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-656017&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1026 14:28:11.271153       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 14:28:11.271219       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 14:28:11.271337       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 14:28:11.291259       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 14:28:11.291313       1 server_linux.go:132] "Using iptables Proxier"
	I1026 14:28:11.297113       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 14:28:11.297507       1 server.go:527] "Version info" version="v1.34.1"
	I1026 14:28:11.297534       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 14:28:11.298650       1 config.go:200] "Starting service config controller"
	I1026 14:28:11.298673       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 14:28:11.298713       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 14:28:11.298733       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 14:28:11.298740       1 config.go:106] "Starting endpoint slice config controller"
	I1026 14:28:11.298771       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 14:28:11.298847       1 config.go:309] "Starting node config controller"
	I1026 14:28:11.298855       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 14:28:11.298862       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 14:28:11.399320       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 14:28:11.399377       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 14:28:11.399421       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d2f21e90bdfb20b52baadfea12c2c6a9a2d85eb2e69ebba8e079dbf1272e4e5a] <==
	E1026 14:26:27.781998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 14:26:27.781790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 14:26:27.782054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 14:26:27.782048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 14:26:27.782186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 14:26:27.782196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 14:26:27.782259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 14:26:27.782261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 14:26:27.782256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 14:26:27.782354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 14:26:28.762282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 14:26:28.766433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 14:26:28.770439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 14:26:28.779033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:26:28.783221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 14:26:28.788274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 14:26:28.922828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 14:26:28.963889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1026 14:26:29.376995       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 14:27:30.452222       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 14:27:30.452235       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1026 14:27:30.452275       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1026 14:27:30.452300       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1026 14:27:30.452329       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1026 14:27:30.452358       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ff718389fd0d587773fca5861c2ecbbf06d2e55df34a9daf84bc2a88de39e750] <==
	I1026 14:27:52.332052       1 serving.go:386] Generated self-signed cert in-memory
	W1026 14:27:53.620664       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 14:27:53.620791       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 14:27:53.620809       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 14:27:53.620819       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 14:27:53.647470       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 14:27:53.647583       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 14:27:53.649836       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 14:27:53.649877       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 14:27:53.650229       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 14:27:53.650262       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 14:27:53.751063       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.474835    4154 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.474905    4154 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.475126    4154 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(303ca5dd-0848-4899-89c5-86a1cf327162): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.475210    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="303ca5dd-0848-4899-89c5-86a1cf327162"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.475818    4154 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.475862    4154 kuberuntime_image.go:43] "Failed to pull image" err="short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.476017    4154 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-cnh5r_default(c9b890f0-43a9-4379-af0d-c767a40fb9a2): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.476319    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-cnh5r" podUID="c9b890f0-43a9-4379-af0d-c767a40fb9a2"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.476584    4154 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.476631    4154 kuberuntime_image.go:43] "Failed to pull image" err="short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.476818    4154 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-5l852_default(712a7bde-503f-4d52-bb5e-f79f7ce120a7): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.477112    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5l852" podUID="712a7bde-503f-4d52-bb5e-f79f7ce120a7"
	Oct 26 14:33:01 functional-656017 kubelet[4154]: E1026 14:33:01.309598    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-cnh5r" podUID="c9b890f0-43a9-4379-af0d-c767a40fb9a2"
	Oct 26 14:33:01 functional-656017 kubelet[4154]: E1026 14:33:01.309951    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="303ca5dd-0848-4899-89c5-86a1cf327162"
	Oct 26 14:33:01 functional-656017 kubelet[4154]: E1026 14:33:01.310030    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5l852" podUID="712a7bde-503f-4d52-bb5e-f79f7ce120a7"
	Oct 26 14:33:13 functional-656017 kubelet[4154]: E1026 14:33:13.309883    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5l852" podUID="712a7bde-503f-4d52-bb5e-f79f7ce120a7"
	Oct 26 14:33:13 functional-656017 kubelet[4154]: E1026 14:33:13.309964    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-cnh5r" podUID="c9b890f0-43a9-4379-af0d-c767a40fb9a2"
	Oct 26 14:33:24 functional-656017 kubelet[4154]: E1026 14:33:24.309616    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-cnh5r" podUID="c9b890f0-43a9-4379-af0d-c767a40fb9a2"
	Oct 26 14:33:27 functional-656017 kubelet[4154]: E1026 14:33:27.309404    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5l852" podUID="712a7bde-503f-4d52-bb5e-f79f7ce120a7"
	Oct 26 14:33:50 functional-656017 kubelet[4154]: E1026 14:33:50.781655    4154 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 26 14:33:50 functional-656017 kubelet[4154]: E1026 14:33:50.781725    4154 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 26 14:33:50 functional-656017 kubelet[4154]: E1026 14:33:50.783318    4154 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-wbqc8_kubernetes-dashboard(3cfdea14-a298-4176-999a-892bdf252dfc): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 26 14:33:50 functional-656017 kubelet[4154]: E1026 14:33:50.783403    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wbqc8" podUID="3cfdea14-a298-4176-999a-892bdf252dfc"
	Oct 26 14:34:03 functional-656017 kubelet[4154]: E1026 14:34:03.311240    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wbqc8" podUID="3cfdea14-a298-4176-999a-892bdf252dfc"
	Oct 26 14:34:43 functional-656017 kubelet[4154]: I1026 14:34:43.065130    4154 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq5ll\" (UniqueName: \"kubernetes.io/projected/18816fa5-7b10-470a-ae7a-0f0514bb3485-kube-api-access-gq5ll\") pod \"mysql-5bb876957f-7nm86\" (UID: \"18816fa5-7b10-470a-ae7a-0f0514bb3485\") " pod="default/mysql-5bb876957f-7nm86"
	
	
	==> storage-provisioner [2db865dc7b06999bd7ed228e936a8c3317814c951c195fea0ce4636cd813806f] <==
	W1026 14:34:18.511615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:20.515478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:20.519927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:22.523570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:22.527682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:24.530861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:24.534716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:26.538038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:26.542656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:28.545899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:28.550000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:30.553399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:30.558582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:32.561508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:32.565484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:34.568070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:34.573126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:36.576874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:36.581099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:38.585253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:38.589391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:40.592522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:40.597929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:42.600791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:42.604847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [dae3a7eefaef6e45c44cb57f835e13d80ec46cebc496a065528ebef1b3f3dc50] <==
	I1026 14:27:20.179254       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 14:27:20.181189       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-656017 -n functional-656017
helpers_test.go:269: (dbg) Run:  kubectl --context functional-656017 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-cnh5r hello-node-connect-7d85dfc575-5l852 mysql-5bb876957f-7nm86 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wbqc8 kubernetes-dashboard-855c9754f9-94hj8
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-656017 describe pod busybox-mount hello-node-75c85bcc94-cnh5r hello-node-connect-7d85dfc575-5l852 mysql-5bb876957f-7nm86 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wbqc8 kubernetes-dashboard-855c9754f9-94hj8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-656017 describe pod busybox-mount hello-node-75c85bcc94-cnh5r hello-node-connect-7d85dfc575-5l852 mysql-5bb876957f-7nm86 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wbqc8 kubernetes-dashboard-855c9754f9-94hj8: exit status 1 (95.643951ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-656017/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:28:36 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://d72feea3741400a60884b44aace0d50fa00c8a56e531a1e8aeeb4607f039e166
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 26 Oct 2025 14:29:34 +0000
	      Finished:     Sun, 26 Oct 2025 14:29:34 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xp9j7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-xp9j7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m7s   default-scheduler  Successfully assigned default/busybox-mount to functional-656017
	  Normal  Pulling    6m7s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m10s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 730ms (57.013s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m10s  kubelet            Created container: mount-munger
	  Normal  Started    5m10s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-cnh5r
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-656017/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:28:26 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ks69p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ks69p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m17s                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-cnh5r to functional-656017
	  Warning  Failed     115s (x3 over 6m17s)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     115s (x3 over 6m17s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    80s (x5 over 6m16s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     80s (x5 over 6m16s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    69s (x4 over 6m17s)   kubelet            Pulling image "kicbase/echo-server"
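	
	The repeated ErrImagePull above is a short-name resolution failure, not a registry outage: CRI-O is running with short-name mode set to enforcing (per the kubelet event text), so the unqualified reference kicbase/echo-server:latest matches more than one candidate registry and the pull is refused. A sketch of the usual workaround, assuming the image is published on Docker Hub under the same name:
	
	  # fully qualify the image so no short-name aliasing is required
	  kubectl --context functional-656017 set image deployment/hello-node \
	    echo-server=docker.io/kicbase/echo-server:latest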
	
	
	Name:             hello-node-connect-7d85dfc575-5l852
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-656017/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:28:25 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mjv8l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-mjv8l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m18s                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-5l852 to functional-656017
	  Warning  Failed     115s (x3 over 6m17s)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     115s (x3 over 6m17s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    77s (x5 over 6m16s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     77s (x5 over 6m16s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    63s (x4 over 6m18s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-7nm86
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-656017/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:34:42 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gq5ll (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gq5ll:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/mysql-5bb876957f-7nm86 to functional-656017
	  Normal  Pulling    1s    kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-656017/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:28:31 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8slwm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-8slwm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m12s                 default-scheduler  Successfully assigned default/sp-pod to functional-656017
	  Warning  Failed     5m11s                 kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     115s (x2 over 5m11s)  kubelet            Error: ErrImagePull
	  Warning  Failed     115s                  kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    103s (x2 over 5m11s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     103s (x2 over 5m11s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    90s (x3 over 6m12s)   kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-wbqc8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-94hj8" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-656017 describe pod busybox-mount hello-node-75c85bcc94-cnh5r hello-node-connect-7d85dfc575-5l852 mysql-5bb876957f-7nm86 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wbqc8 kubernetes-dashboard-855c9754f9-94hj8: exit status 1
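The toomanyrequests events in the sp-pod section above are Docker Hub's anonymous pull quota, which is enforced per source IP, so every retry from this CI host draws down the same budget. Docker publishes a test repository for inspecting the remaining quota; a sketch, assuming curl and jq are available on the host:

	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" \
	  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit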
--- FAIL: TestFunctional/parallel/DashboardCmd (302.32s)

TestFunctional/parallel/ServiceCmdConnect (602.9s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-656017 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-656017 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-5l852" [712a7bde-503f-4d52-bb5e-f79f7ce120a7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
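The 10m0s wait at functional_test.go:1645 polls the pod list for the app=hello-node-connect selector until every matching pod reports Ready; it never does, because the image pull keeps failing. The equivalent check by hand, as a sketch:

	kubectl --context functional-656017 wait pod -l app=hello-node-connect \
	  --for=condition=Ready --timeout=10m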
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-656017 -n functional-656017
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-26 14:38:26.151770032 +0000 UTC m=+1459.707167410
functional_test.go:1645: (dbg) Run:  kubectl --context functional-656017 describe po hello-node-connect-7d85dfc575-5l852 -n default
functional_test.go:1645: (dbg) kubectl --context functional-656017 describe po hello-node-connect-7d85dfc575-5l852 -n default:
Name:             hello-node-connect-7d85dfc575-5l852
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-656017/192.168.49.2
Start Time:       Sun, 26 Oct 2025 14:28:25 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mjv8l (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-mjv8l:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-5l852 to functional-656017
  Warning  Failed     108s (x4 over 9m59s)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     108s (x4 over 9m59s)  kubelet            Error: ErrImagePull
  Normal   BackOff    42s (x10 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     42s (x10 over 9m58s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    27s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-656017 logs hello-node-connect-7d85dfc575-5l852 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-656017 logs hello-node-connect-7d85dfc575-5l852 -n default: exit status 1 (65.998794ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-5l852" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-656017 logs hello-node-connect-7d85dfc575-5l852 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-656017 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-5l852
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-656017/192.168.49.2
Start Time:       Sun, 26 Oct 2025 14:28:25 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mjv8l (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-mjv8l:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-5l852 to functional-656017
  Warning  Failed     108s (x4 over 9m59s)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     108s (x4 over 9m59s)  kubelet            Error: ErrImagePull
  Normal   BackOff    42s (x10 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     42s (x10 over 9m58s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    27s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-656017 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-656017 logs -l app=hello-node-connect: exit status 1 (61.92298ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-5l852" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-656017 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-656017 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.185.91
IPs:                      10.111.185.91
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32244/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
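The empty Endpoints line in the Service describe above is the direct consequence of the failed pull: with no Ready pod behind the app=hello-node-connect selector, NodePort 32244 has nothing to forward to, so connections fail even though the Service object itself is healthy. A quick confirmation, as a sketch (newer clusters report the same through EndpointSlices):

	kubectl --context functional-656017 get endpoints hello-node-connect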
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-656017
helpers_test.go:243: (dbg) docker inspect functional-656017:

-- stdout --
	[
	    {
	        "Id": "7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51",
	        "Created": "2025-10-26T14:26:15.662564705Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 871791,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T14:26:15.695471555Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51/hostname",
	        "HostsPath": "/var/lib/docker/containers/7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51/hosts",
	        "LogPath": "/var/lib/docker/containers/7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51/7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51-json.log",
	        "Name": "/functional-656017",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-656017:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-656017",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51",
	                "LowerDir": "/var/lib/docker/overlay2/18941be66dc438b29a0d8f1b6cfb5e94b3f5364e7ff8ec834dc7e25ed24e4d78-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/18941be66dc438b29a0d8f1b6cfb5e94b3f5364e7ff8ec834dc7e25ed24e4d78/merged",
	                "UpperDir": "/var/lib/docker/overlay2/18941be66dc438b29a0d8f1b6cfb5e94b3f5364e7ff8ec834dc7e25ed24e4d78/diff",
	                "WorkDir": "/var/lib/docker/overlay2/18941be66dc438b29a0d8f1b6cfb5e94b3f5364e7ff8ec834dc7e25ed24e4d78/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-656017",
	                "Source": "/var/lib/docker/volumes/functional-656017/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-656017",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-656017",
	                "name.minikube.sigs.k8s.io": "functional-656017",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0491f8ee0884dffdfb60cf16586bdc089924ef954ce989676a59241184322961",
	            "SandboxKey": "/var/run/docker/netns/0491f8ee0884",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33546"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33547"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33550"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33548"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33549"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-656017": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:57:b7:c3:e1:66",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "530477eeed5a5cd1e2f2740c0bd7a64c9f8fbcffeceb135f9b5907f3c53af82d",
	                    "EndpointID": "0da32925992ecd2bb8901e4bfaa39ba5c2a59c288a0aa7b60b19bd3c9f7d4c8f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-656017",
	                        "7e6d295c9fb0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
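The NetworkSettings.Ports block in the inspect output shows how the host reaches the node container: every port is published only on 127.0.0.1 behind an ephemeral host port, e.g. the apiserver's 8441 behind 33549. The same mapping can be read back without parsing the inspect JSON, for example:

	docker port functional-656017 8441
	# expected: 127.0.0.1:33549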
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-656017 -n functional-656017
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-656017 logs -n 25: (1.270212486s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-656017 image ls                                                                                                                                      │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image          │ functional-656017 image load --daemon kicbase/echo-server:functional-656017 --alsologtostderr                                                                   │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image          │ functional-656017 image ls                                                                                                                                      │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image          │ functional-656017 image save kicbase/echo-server:functional-656017 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image          │ functional-656017 image rm kicbase/echo-server:functional-656017 --alsologtostderr                                                                              │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image          │ functional-656017 image ls                                                                                                                                      │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image          │ functional-656017 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image          │ functional-656017 image save --daemon kicbase/echo-server:functional-656017 --alsologtostderr                                                                   │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ ssh            │ functional-656017 ssh sudo cat /etc/ssl/certs/845095.pem                                                                                                        │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ ssh            │ functional-656017 ssh sudo cat /usr/share/ca-certificates/845095.pem                                                                                            │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ ssh            │ functional-656017 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ ssh            │ functional-656017 ssh sudo cat /etc/ssl/certs/8450952.pem                                                                                                       │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ ssh            │ functional-656017 ssh sudo cat /usr/share/ca-certificates/8450952.pem                                                                                           │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ ssh            │ functional-656017 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ ssh            │ functional-656017 ssh sudo cat /etc/test/nested/copy/845095/hosts                                                                                               │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image          │ functional-656017 image ls --format short --alsologtostderr                                                                                                     │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image          │ functional-656017 image ls --format yaml --alsologtostderr                                                                                                      │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ ssh            │ functional-656017 ssh pgrep buildkitd                                                                                                                           │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │                     │
	│ image          │ functional-656017 image build -t localhost/my-image:functional-656017 testdata/build --alsologtostderr                                                          │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image          │ functional-656017 image ls                                                                                                                                      │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image          │ functional-656017 image ls --format json --alsologtostderr                                                                                                      │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image          │ functional-656017 image ls --format table --alsologtostderr                                                                                                     │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ update-context │ functional-656017 update-context --alsologtostderr -v=2                                                                                                         │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ update-context │ functional-656017 update-context --alsologtostderr -v=2                                                                                                         │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ update-context │ functional-656017 update-context --alsologtostderr -v=2                                                                                                         │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
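	
	The image rows in the audit log above are the ImageCommands subtests driving a save/rm/load round-trip through the CRI-O image store. Replayed by hand, the sequence looks like this sketch (the tarball path is the one from the log):
	
	  minikube -p functional-656017 image save kicbase/echo-server:functional-656017 \
	    /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	  minikube -p functional-656017 image rm kicbase/echo-server:functional-656017
	  minikube -p functional-656017 image load \
	    /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar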
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 14:29:41
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 14:29:41.706256  884792 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:29:41.706528  884792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:29:41.706536  884792 out.go:374] Setting ErrFile to fd 2...
	I1026 14:29:41.706540  884792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:29:41.706726  884792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:29:41.707221  884792 out.go:368] Setting JSON to false
	I1026 14:29:41.708137  884792 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7930,"bootTime":1761481052,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 14:29:41.708256  884792 start.go:141] virtualization: kvm guest
	I1026 14:29:41.710295  884792 out.go:179] * [functional-656017] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 14:29:41.711616  884792 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 14:29:41.711623  884792 notify.go:220] Checking for updates...
	I1026 14:29:41.713376  884792 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:29:41.714796  884792 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 14:29:41.716100  884792 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 14:29:41.717345  884792 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 14:29:41.718672  884792 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 14:29:41.720405  884792 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:29:41.720928  884792 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:29:41.745671  884792 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 14:29:41.745765  884792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:29:41.803208  884792 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-26 14:29:41.791510406 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 14:29:41.803325  884792 docker.go:318] overlay module found
	I1026 14:29:41.805202  884792 out.go:179] * Using the docker driver based on existing profile
	I1026 14:29:41.806275  884792 start.go:305] selected driver: docker
	I1026 14:29:41.806287  884792 start.go:925] validating driver "docker" against &{Name:functional-656017 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-656017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:29:41.806380  884792 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 14:29:41.806469  884792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:29:41.862329  884792 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-26 14:29:41.852410907 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 14:29:41.863025  884792 cni.go:84] Creating CNI manager for ""
	I1026 14:29:41.863097  884792 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 14:29:41.863156  884792 start.go:349] cluster config:
	{Name:functional-656017 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-656017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:29:41.864878  884792 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 26 14:35:22 functional-656017 crio[3608]: time="2025-10-26T14:35:22.52893151Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 26 14:35:36 functional-656017 crio[3608]: time="2025-10-26T14:35:36.309915579Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=5fdd1d29-473b-4af4-9081-3ba9113ace00 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:35:36 functional-656017 crio[3608]: time="2025-10-26T14:35:36.310208191Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=5fdd1d29-473b-4af4-9081-3ba9113ace00 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:35:36 functional-656017 crio[3608]: time="2025-10-26T14:35:36.310265124Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 found" id=5fdd1d29-473b-4af4-9081-3ba9113ace00 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:35:51 functional-656017 crio[3608]: time="2025-10-26T14:35:51.310156091Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=52519714-e3f2-4aa0-8aee-0ff545fd8b50 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:35:51 functional-656017 crio[3608]: time="2025-10-26T14:35:51.310372422Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=52519714-e3f2-4aa0-8aee-0ff545fd8b50 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:35:51 functional-656017 crio[3608]: time="2025-10-26T14:35:51.310418628Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 found" id=52519714-e3f2-4aa0-8aee-0ff545fd8b50 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:35:53 functional-656017 crio[3608]: time="2025-10-26T14:35:53.168269456Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 26 14:36:07 functional-656017 crio[3608]: time="2025-10-26T14:36:07.758234424Z" level=info msg="Trying to access \"docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8\""
	Oct 26 14:36:38 functional-656017 crio[3608]: time="2025-10-26T14:36:38.40975996Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=32047e6d-3811-46b2-9138-be5349b175b7 name=/runtime.v1.ImageService/PullImage
	Oct 26 14:36:38 functional-656017 crio[3608]: time="2025-10-26T14:36:38.41073529Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ea58478a-8383-4f8d-bd8f-f75368846e20 name=/runtime.v1.ImageService/PullImage
	Oct 26 14:36:38 functional-656017 crio[3608]: time="2025-10-26T14:36:38.411497329Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=24dbbae8-b8e9-467a-a746-0c307dcbd74b name=/runtime.v1.ImageService/PullImage
	Oct 26 14:36:38 functional-656017 crio[3608]: time="2025-10-26T14:36:38.427947821Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 26 14:37:09 functional-656017 crio[3608]: time="2025-10-26T14:37:09.081061078Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 26 14:37:39 functional-656017 crio[3608]: time="2025-10-26T14:37:39.733769692Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=aa4b445c-b3ec-4715-adc4-fda2a97c0a04 name=/runtime.v1.ImageService/PullImage
	Oct 26 14:37:39 functional-656017 crio[3608]: time="2025-10-26T14:37:39.73762891Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Oct 26 14:37:51 functional-656017 crio[3608]: time="2025-10-26T14:37:51.30953154Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=38fd4510-05b0-4e7f-9d4e-24b7fc2a6d3f name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:37:51 functional-656017 crio[3608]: time="2025-10-26T14:37:51.309749162Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=38fd4510-05b0-4e7f-9d4e-24b7fc2a6d3f name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:37:51 functional-656017 crio[3608]: time="2025-10-26T14:37:51.309832034Z" level=info msg="Neither image nor artfiact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=38fd4510-05b0-4e7f-9d4e-24b7fc2a6d3f name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:38:05 functional-656017 crio[3608]: time="2025-10-26T14:38:05.310239871Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=7552c174-9375-4635-aef5-45e5e16d52be name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:38:05 functional-656017 crio[3608]: time="2025-10-26T14:38:05.310495188Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=7552c174-9375-4635-aef5-45e5e16d52be name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:38:05 functional-656017 crio[3608]: time="2025-10-26T14:38:05.310552978Z" level=info msg="Neither image nor artfiact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=7552c174-9375-4635-aef5-45e5e16d52be name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:38:19 functional-656017 crio[3608]: time="2025-10-26T14:38:19.3100006Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=050cf1ce-5f35-49c2-9f1f-b6fba01f9d27 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:38:19 functional-656017 crio[3608]: time="2025-10-26T14:38:19.310197004Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=050cf1ce-5f35-49c2-9f1f-b6fba01f9d27 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:38:19 functional-656017 crio[3608]: time="2025-10-26T14:38:19.310241039Z" level=info msg="Neither image nor artfiact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=050cf1ce-5f35-49c2-9f1f-b6fba01f9d27 name=/runtime.v1.ImageService/ImageStatus
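	
	The CRI-O log above shows the kubernetesui/dashboard and kubernetesui/metrics-scraper images being pulled from docker.io and repeatedly reported as not found on the node, which lines up with the TestFunctional/parallel/DashboardCmd failure. A quick way to check from the node whether those pulls ever complete (a sketch; the image names are copied from the log, and crictl typically needs sudo inside the kicbase node):
	
		minikube -p functional-656017 ssh -- sudo crictl images
		minikube -p functional-656017 ssh -- sudo crictl pull docker.io/kubernetesui/dashboard:v2.7.0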
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d72feea374140       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 minutes ago       Exited              mount-munger              0                   321356887ae61       busybox-mount                               default
	ae1aa39570023       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e       9 minutes ago       Running             nginx                     0                   6f2dd0f292d4d       nginx-svc                                   default
	2db865dc7b069       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Running             storage-provisioner       2                   e1cf8e1f14fd9       storage-provisioner                         kube-system
	ca2758c3b0747       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      10 minutes ago      Running             kube-apiserver            0                   b8dc315566e53       kube-apiserver-functional-656017            kube-system
	9b1b7e8dd2367       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      10 minutes ago      Running             kube-controller-manager   1                   695d7ad4f4a81       kube-controller-manager-functional-656017   kube-system
	ff718389fd0d5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      10 minutes ago      Running             kube-scheduler            1                   ffea8b2f09a06       kube-scheduler-functional-656017            kube-system
	c376e39f3b52c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Running             etcd                      1                   65ab1cbe95671       etcd-functional-656017                      kube-system
	dae3a7eefaef6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Exited              storage-provisioner       1                   e1cf8e1f14fd9       storage-provisioner                         kube-system
	f10fe1a825ece       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Running             coredns                   1                   4b282a34c4d98       coredns-66bc5c9577-fvls7                    kube-system
	ac2a4f4184e61       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      11 minutes ago      Running             kube-proxy                1                   1366df45696e6       kube-proxy-lzmlr                            kube-system
	3a00b00b881c9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      11 minutes ago      Running             kindnet-cni               1                   0a3c75fa75530       kindnet-v9qhm                               kube-system
	21a7b04b3aa18       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   0                   4b282a34c4d98       coredns-66bc5c9577-fvls7                    kube-system
	925d395db73b9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      11 minutes ago      Exited              kindnet-cni               0                   0a3c75fa75530       kindnet-v9qhm                               kube-system
	10f77c7ed3607       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      11 minutes ago      Exited              kube-proxy                0                   1366df45696e6       kube-proxy-lzmlr                            kube-system
	845e9b22f7c9c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      12 minutes ago      Exited              kube-controller-manager   0                   695d7ad4f4a81       kube-controller-manager-functional-656017   kube-system
	d2f21e90bdfb2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      12 minutes ago      Exited              kube-scheduler            0                   ffea8b2f09a06       kube-scheduler-functional-656017            kube-system
	de4a79c72b10d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      12 minutes ago      Exited              etcd                      0                   65ab1cbe95671       etcd-functional-656017                      kube-system
	
	
	==> coredns [21a7b04b3aa18668b05a7e74c0c854e868c19d467f7d6cb885dc923426d1175d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40376 - 33371 "HINFO IN 7412649040229934986.2398417335010100422. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.479800978s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f10fe1a825ece4e7e5704e2cc7128f0a02fac89b9c51059a6cdd22793ea14365] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52631 - 8649 "HINFO IN 7064989214614263487.4096664645323782067. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.119610551s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
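	
	The connection-refused and TLS-handshake-timeout errors above are this CoreDNS instance dialing 10.96.0.1:443, the in-cluster kubernetes Service address (the first IP of the ServiceCIDR 10.96.0.0/12 from the cluster config dump), while the apiserver was still restarting; the ready plugin keeps reporting "Still waiting on: kubernetes" until the API is reachable again. A minimal sanity check (assuming the default functional-656017 kubeconfig context) would be:
	
		kubectl --context functional-656017 get svc kubernetes -o wide
		kubectl --context functional-656017 get --raw='/readyz?verbose'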
	
	
	==> describe nodes <==
	Name:               functional-656017
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-656017
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=functional-656017
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T14_26_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 14:26:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-656017
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 14:38:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 14:36:53 +0000   Sun, 26 Oct 2025 14:26:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 14:36:53 +0000   Sun, 26 Oct 2025 14:26:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 14:36:53 +0000   Sun, 26 Oct 2025 14:26:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 14:36:53 +0000   Sun, 26 Oct 2025 14:26:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-656017
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                48b09da4-51f5-4aad-ba21-72df28aa14f3
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-cnh5r                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-5l852           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-7nm86                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     3m45s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 coredns-66bc5c9577-fvls7                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-656017                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-v9qhm                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-656017              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-656017     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-lzmlr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-656017              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-wbqc8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-94hj8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-656017 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-656017 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-656017 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                node-controller  Node functional-656017 event: Registered Node functional-656017 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-656017 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-656017 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-656017 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-656017 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-656017 event: Registered Node functional-656017 in Controller
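	
	As a cross-check, the Allocated resources percentages above follow directly from the pod table and the node capacity: CPU requests are 600m + 100m + 100m + 100m + 250m + 200m + 100m = 1450m, and 1450m / 8000m (8 CPUs) rounds to 18%; CPU limits are 700m + 100m = 800m, i.e. 10%; memory requests are 512Mi + 70Mi + 100Mi + 50Mi = 732Mi, and 732Mi / 32863352Ki (about 32093Mi) rounds to 2%.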
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	
	
	==> etcd [c376e39f3b52c0c27a12c20a752c2867f4459d455be0af813e18bb55ac82d433] <==
	{"level":"warn","ts":"2025-10-26T14:27:53.039078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.045231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.052346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.058547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.065475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.072805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.080415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.093382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.107398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.114758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.121179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.128634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.135424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.143287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.150345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.156984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.164095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.183548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.187308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.193611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.200153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.250747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36940","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T14:37:52.741358Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":998}
	{"level":"info","ts":"2025-10-26T14:37:52.750548Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":998,"took":"8.174187ms","hash":1610504942,"current-db-size-bytes":3313664,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":3313664,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2025-10-26T14:37:52.750610Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1610504942,"revision":998,"compact-revision":-1}
	
	
	==> etcd [de4a79c72b10d2e604ace86f200c4beb93e1fa32406f916e7886b8232029ecce] <==
	{"level":"warn","ts":"2025-10-26T14:26:27.278460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.284638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.291821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.307848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.314269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.320424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.363779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35742","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T14:27:30.562035Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-26T14:27:30.562147Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-656017","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-26T14:27:30.562265Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-26T14:27:30.563832Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-26T14:27:30.563898Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T14:27:30.563920Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-26T14:27:30.563978Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-26T14:27:30.564001Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-26T14:27:30.564001Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-26T14:27:30.564033Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-26T14:27:30.564087Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-26T14:27:30.564101Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-26T14:27:30.564070Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-26T14:27:30.564126Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T14:27:30.566120Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-26T14:27:30.566210Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T14:27:30.566245Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-26T14:27:30.566255Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-656017","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 14:38:27 up  2:20,  0 user,  load average: 0.07, 0.24, 0.54
	Linux functional-656017 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3a00b00b881c9d02ec005d4a24113c1b2ef56dd5b59b8f853a70bead7cfbbe7b] <==
	I1026 14:36:20.512637       1 main.go:301] handling current node
	I1026 14:36:30.521486       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:36:30.521528       1 main.go:301] handling current node
	I1026 14:36:40.518413       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:36:40.518446       1 main.go:301] handling current node
	I1026 14:36:50.512713       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:36:50.512797       1 main.go:301] handling current node
	I1026 14:37:00.514157       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:37:00.514238       1 main.go:301] handling current node
	I1026 14:37:10.512730       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:37:10.512776       1 main.go:301] handling current node
	I1026 14:37:20.517964       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:37:20.517996       1 main.go:301] handling current node
	I1026 14:37:30.521404       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:37:30.521448       1 main.go:301] handling current node
	I1026 14:37:40.514933       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:37:40.514972       1 main.go:301] handling current node
	I1026 14:37:50.513283       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:37:50.513329       1 main.go:301] handling current node
	I1026 14:38:00.512711       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:38:00.512743       1 main.go:301] handling current node
	I1026 14:38:10.512673       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:38:10.512726       1 main.go:301] handling current node
	I1026 14:38:20.512583       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:38:20.512622       1 main.go:301] handling current node
	
	
	==> kindnet [925d395db73b90f6f8405290b4a9c93369786816f1fce17a08dc90ee359443d4] <==
	I1026 14:26:36.186777       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 14:26:36.187051       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1026 14:26:36.215845       1 main.go:148] setting mtu 1500 for CNI 
	I1026 14:26:36.215876       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 14:26:36.215900       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T14:26:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 14:26:36.417338       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 14:26:36.417362       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 14:26:36.417377       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 14:26:36.417977       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 14:26:36.817566       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 14:26:36.817595       1 metrics.go:72] Registering metrics
	I1026 14:26:36.817681       1 controller.go:711] "Syncing nftables rules"
	I1026 14:26:46.417194       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:26:46.417266       1 main.go:301] handling current node
	I1026 14:26:56.417258       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:26:56.417297       1 main.go:301] handling current node
	I1026 14:27:06.417463       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:27:06.417500       1 main.go:301] handling current node
	I1026 14:27:16.417114       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:27:16.417154       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ca2758c3b0747b23b01630bbf07b4e70b74246e999a371f52426068264bb6eaa] <==
	I1026 14:27:53.701772       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 14:27:53.702012       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 14:27:53.702062       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 14:27:53.706550       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 14:27:53.728989       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 14:27:53.735942       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 14:27:54.604728       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1026 14:27:54.811076       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1026 14:27:54.812437       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 14:27:54.817571       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 14:27:55.181493       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 14:27:55.270909       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 14:27:55.279925       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 14:27:55.342794       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 14:27:55.350188       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 14:27:57.331287       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 14:28:20.974427       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.174.82"}
	I1026 14:28:25.530781       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.150.87"}
	I1026 14:28:25.819878       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.185.91"}
	I1026 14:28:26.786816       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.80.146"}
	I1026 14:29:42.711630       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 14:29:42.815661       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.251.128"}
	I1026 14:29:42.828033       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.232.5"}
	I1026 14:34:42.920951       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.97.97.210"}
	I1026 14:37:53.625269       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
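	
	All of the ClusterIPs allocated above (10.105.174.82, 10.100.150.87, 10.111.185.91, 10.102.80.146, 10.108.251.128, 10.106.232.5, 10.97.97.210) fall inside the ServiceCIDR 10.96.0.0/12 from the cluster config dump, and the default/mysql Service created at 14:34:42 matches the 3m45s age of the mysql pod in the node description above. To list them (a sketch, assuming the default context name):
	
		kubectl --context functional-656017 get svc -A -o wide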
	
	
	==> kube-controller-manager [845e9b22f7c9cbbfd8966f1c14a659b863ea4d252c9783d731959d29e933f667] <==
	I1026 14:26:34.754690       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 14:26:34.754694       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 14:26:34.754715       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 14:26:34.754731       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 14:26:34.754876       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 14:26:34.754950       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 14:26:34.755053       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 14:26:34.755068       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 14:26:34.755106       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 14:26:34.755491       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 14:26:34.755512       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 14:26:34.755581       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 14:26:34.755590       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 14:26:34.758063       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 14:26:34.758145       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 14:26:34.758211       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 14:26:34.758222       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 14:26:34.758230       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 14:26:34.760332       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 14:26:34.763565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:26:34.769710       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-656017" podCIDRs=["10.244.0.0/24"]
	I1026 14:26:34.777273       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 14:26:34.780281       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 14:26:34.783551       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 14:26:49.706645       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [9b1b7e8dd23671d7230b504e22a6194c4a3dded87c9396dc7558bfcd19bfd0cd] <==
	I1026 14:27:57.021496       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 14:27:57.023787       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 14:27:57.026170       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 14:27:57.026242       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 14:27:57.026243       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 14:27:57.026443       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 14:27:57.026555       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 14:27:57.026577       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 14:27:57.027368       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 14:27:57.027394       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1026 14:27:57.027445       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 14:27:57.027452       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 14:27:57.027571       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 14:27:57.027828       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 14:27:57.028511       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 14:27:57.028539       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 14:27:57.028779       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 14:27:57.030794       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:27:57.042225       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1026 14:29:42.761746       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:29:42.767312       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:29:42.768859       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:29:42.770695       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:29:42.772025       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:29:42.777667       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [10f77c7ed3607a9f2e9e1b7386954e24cfa2bd9656b621d9976d0cb1df09d688] <==
	I1026 14:26:36.002635       1 server_linux.go:53] "Using iptables proxy"
	I1026 14:26:36.075108       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 14:26:36.175750       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 14:26:36.175810       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 14:26:36.175960       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 14:26:36.195046       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 14:26:36.195099       1 server_linux.go:132] "Using iptables Proxier"
	I1026 14:26:36.200612       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 14:26:36.201019       1 server.go:527] "Version info" version="v1.34.1"
	I1026 14:26:36.201037       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 14:26:36.202299       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 14:26:36.202325       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 14:26:36.202346       1 config.go:200] "Starting service config controller"
	I1026 14:26:36.202368       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 14:26:36.202676       1 config.go:309] "Starting node config controller"
	I1026 14:26:36.202774       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 14:26:36.202783       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 14:26:36.202358       1 config.go:106] "Starting endpoint slice config controller"
	I1026 14:26:36.203225       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 14:26:36.302515       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 14:26:36.303744       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 14:26:36.303782       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [ac2a4f4184e61b6f3e212173f5fd287a266fe9854e7e6d92cc6cc308487c717e] <==
	E1026 14:27:20.271682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-656017&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:27:21.330105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-656017&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:27:23.056459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-656017&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:27:28.667874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-656017&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:27:50.258783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-656017&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1026 14:28:11.271153       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 14:28:11.271219       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 14:28:11.271337       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 14:28:11.291259       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 14:28:11.291313       1 server_linux.go:132] "Using iptables Proxier"
	I1026 14:28:11.297113       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 14:28:11.297507       1 server.go:527] "Version info" version="v1.34.1"
	I1026 14:28:11.297534       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 14:28:11.298650       1 config.go:200] "Starting service config controller"
	I1026 14:28:11.298673       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 14:28:11.298713       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 14:28:11.298733       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 14:28:11.298740       1 config.go:106] "Starting endpoint slice config controller"
	I1026 14:28:11.298771       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 14:28:11.298847       1 config.go:309] "Starting node config controller"
	I1026 14:28:11.298855       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 14:28:11.298862       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 14:28:11.399320       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 14:28:11.399377       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 14:28:11.399421       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
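	
	This restarted kube-proxy spent roughly 14:27:20 to 14:28:11 retrying its node watch against https://control-plane.minikube.internal:8441 (the apiserver port from the cluster config) before its caches synced, matching the apiserver restart visible in the etcd and CoreDNS logs. A direct probe from the node (a sketch; assumes curl is present in the kicbase image and that /healthz is, as by default, readable anonymously):
	
		minikube -p functional-656017 ssh -- curl -sk https://localhost:8441/healthz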
	
	
	==> kube-scheduler [d2f21e90bdfb20b52baadfea12c2c6a9a2d85eb2e69ebba8e079dbf1272e4e5a] <==
	E1026 14:26:27.781998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 14:26:27.781790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 14:26:27.782054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 14:26:27.782048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 14:26:27.782186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 14:26:27.782196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 14:26:27.782259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 14:26:27.782261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 14:26:27.782256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 14:26:27.782354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 14:26:28.762282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 14:26:28.766433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 14:26:28.770439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 14:26:28.779033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:26:28.783221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 14:26:28.788274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 14:26:28.922828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 14:26:28.963889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1026 14:26:29.376995       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 14:27:30.452222       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 14:27:30.452235       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1026 14:27:30.452275       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1026 14:27:30.452300       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1026 14:27:30.452329       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1026 14:27:30.452358       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ff718389fd0d587773fca5861c2ecbbf06d2e55df34a9daf84bc2a88de39e750] <==
	I1026 14:27:52.332052       1 serving.go:386] Generated self-signed cert in-memory
	W1026 14:27:53.620664       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 14:27:53.620791       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 14:27:53.620809       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 14:27:53.620819       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 14:27:53.647470       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 14:27:53.647583       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 14:27:53.649836       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 14:27:53.649877       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 14:27:53.650229       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 14:27:53.650262       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 14:27:53.751063       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 14:36:38 functional-656017 kubelet[4154]: E1026 14:36:38.411072    4154 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 26 14:36:38 functional-656017 kubelet[4154]: E1026 14:36:38.411115    4154 kuberuntime_image.go:43] "Failed to pull image" err="short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 26 14:36:38 functional-656017 kubelet[4154]: E1026 14:36:38.411253    4154 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-5l852_default(712a7bde-503f-4d52-bb5e-f79f7ce120a7): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Oct 26 14:36:38 functional-656017 kubelet[4154]: E1026 14:36:38.412607    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5l852" podUID="712a7bde-503f-4d52-bb5e-f79f7ce120a7"
	Oct 26 14:36:52 functional-656017 kubelet[4154]: E1026 14:36:52.309841    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-cnh5r" podUID="c9b890f0-43a9-4379-af0d-c767a40fb9a2"
	Oct 26 14:36:53 functional-656017 kubelet[4154]: E1026 14:36:53.309227    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="303ca5dd-0848-4899-89c5-86a1cf327162"
	Oct 26 14:36:53 functional-656017 kubelet[4154]: E1026 14:36:53.309233    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5l852" podUID="712a7bde-503f-4d52-bb5e-f79f7ce120a7"
	Oct 26 14:37:05 functional-656017 kubelet[4154]: E1026 14:37:05.309996    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-cnh5r" podUID="c9b890f0-43a9-4379-af0d-c767a40fb9a2"
	Oct 26 14:37:07 functional-656017 kubelet[4154]: E1026 14:37:07.309824    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="303ca5dd-0848-4899-89c5-86a1cf327162"
	Oct 26 14:37:08 functional-656017 kubelet[4154]: E1026 14:37:08.309210    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5l852" podUID="712a7bde-503f-4d52-bb5e-f79f7ce120a7"
	Oct 26 14:37:18 functional-656017 kubelet[4154]: E1026 14:37:18.310049    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-cnh5r" podUID="c9b890f0-43a9-4379-af0d-c767a40fb9a2"
	Oct 26 14:37:18 functional-656017 kubelet[4154]: E1026 14:37:18.310205    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="303ca5dd-0848-4899-89c5-86a1cf327162"
	Oct 26 14:37:21 functional-656017 kubelet[4154]: E1026 14:37:21.309676    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5l852" podUID="712a7bde-503f-4d52-bb5e-f79f7ce120a7"
	Oct 26 14:37:31 functional-656017 kubelet[4154]: E1026 14:37:31.309496    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-cnh5r" podUID="c9b890f0-43a9-4379-af0d-c767a40fb9a2"
	Oct 26 14:37:33 functional-656017 kubelet[4154]: E1026 14:37:33.309400    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5l852" podUID="712a7bde-503f-4d52-bb5e-f79f7ce120a7"
	Oct 26 14:37:39 functional-656017 kubelet[4154]: E1026 14:37:39.733257    4154 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 26 14:37:39 functional-656017 kubelet[4154]: E1026 14:37:39.733328    4154 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 26 14:37:39 functional-656017 kubelet[4154]: E1026 14:37:39.733554    4154 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-wbqc8_kubernetes-dashboard(3cfdea14-a298-4176-999a-892bdf252dfc): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 26 14:37:39 functional-656017 kubelet[4154]: E1026 14:37:39.733630    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wbqc8" podUID="3cfdea14-a298-4176-999a-892bdf252dfc"
	Oct 26 14:37:42 functional-656017 kubelet[4154]: E1026 14:37:42.309752    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-cnh5r" podUID="c9b890f0-43a9-4379-af0d-c767a40fb9a2"
	Oct 26 14:37:44 functional-656017 kubelet[4154]: E1026 14:37:44.309785    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5l852" podUID="712a7bde-503f-4d52-bb5e-f79f7ce120a7"
	Oct 26 14:37:51 functional-656017 kubelet[4154]: E1026 14:37:51.310268    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wbqc8" podUID="3cfdea14-a298-4176-999a-892bdf252dfc"
	Oct 26 14:37:56 functional-656017 kubelet[4154]: E1026 14:37:56.310068    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-cnh5r" podUID="c9b890f0-43a9-4379-af0d-c767a40fb9a2"
	Oct 26 14:38:05 functional-656017 kubelet[4154]: E1026 14:38:05.310971    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wbqc8" podUID="3cfdea14-a298-4176-999a-892bdf252dfc"
	Oct 26 14:38:19 functional-656017 kubelet[4154]: E1026 14:38:19.310596    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wbqc8" podUID="3cfdea14-a298-4176-999a-892bdf252dfc"
	
	
	==> storage-provisioner [2db865dc7b06999bd7ed228e936a8c3317814c951c195fea0ce4636cd813806f] <==
	W1026 14:38:03.386100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:05.389470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:05.393569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:07.397024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:07.401004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:09.404556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:09.409292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:11.412614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:11.416787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:13.420029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:13.424302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:15.427569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:15.432516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:17.435911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:17.440784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:19.443706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:19.447609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:21.451081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:21.454983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:23.458077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:23.461775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:25.464563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:25.469304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:27.472334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:38:27.475935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [dae3a7eefaef6e45c44cb57f835e13d80ec46cebc496a065528ebef1b3f3dc50] <==
	I1026 14:27:20.179254       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 14:27:20.181189       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-656017 -n functional-656017
helpers_test.go:269: (dbg) Run:  kubectl --context functional-656017 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-cnh5r hello-node-connect-7d85dfc575-5l852 mysql-5bb876957f-7nm86 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wbqc8 kubernetes-dashboard-855c9754f9-94hj8
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-656017 describe pod busybox-mount hello-node-75c85bcc94-cnh5r hello-node-connect-7d85dfc575-5l852 mysql-5bb876957f-7nm86 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wbqc8 kubernetes-dashboard-855c9754f9-94hj8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-656017 describe pod busybox-mount hello-node-75c85bcc94-cnh5r hello-node-connect-7d85dfc575-5l852 mysql-5bb876957f-7nm86 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wbqc8 kubernetes-dashboard-855c9754f9-94hj8: exit status 1 (93.517063ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-656017/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:28:36 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://d72feea3741400a60884b44aace0d50fa00c8a56e531a1e8aeeb4607f039e166
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 26 Oct 2025 14:29:34 +0000
	      Finished:     Sun, 26 Oct 2025 14:29:34 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xp9j7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-xp9j7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m51s  default-scheduler  Successfully assigned default/busybox-mount to functional-656017
	  Normal  Pulling    9m51s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     8m54s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 730ms (57.013s including waiting). Image size: 4631262 bytes.
	  Normal  Created    8m54s  kubelet            Created container: mount-munger
	  Normal  Started    8m54s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-cnh5r
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-656017/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:28:26 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ks69p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ks69p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-cnh5r to functional-656017
	  Warning  Failed     110s (x4 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     110s (x4 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    32s (x11 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     32s (x11 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    19s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-5l852
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-656017/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:28:25 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mjv8l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-mjv8l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-5l852 to functional-656017
	  Warning  Failed     110s (x4 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     110s (x4 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    44s (x10 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     44s (x10 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    29s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-7nm86
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-656017/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:34:42 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gq5ll (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gq5ll:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m45s  default-scheduler  Successfully assigned default/mysql-5bb876957f-7nm86 to functional-656017
	  Normal  Pulling    3m45s  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-656017/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:28:31 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8slwm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-8slwm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  9m56s                 default-scheduler  Successfully assigned default/sp-pod to functional-656017
	  Warning  Failed     8m55s                 kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     110s (x3 over 8m55s)  kubelet            Error: ErrImagePull
	  Warning  Failed     110s (x2 over 5m39s)  kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    70s (x5 over 8m55s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     70s (x5 over 8m55s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    59s (x4 over 9m56s)   kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-wbqc8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-94hj8" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-656017 describe pod busybox-mount hello-node-75c85bcc94-cnh5r hello-node-connect-7d85dfc575-5l852 mysql-5bb876957f-7nm86 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wbqc8 kubernetes-dashboard-855c9754f9-94hj8: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.90s)
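The failure above is an image-resolution problem rather than a service-connectivity one: CRI-O's short-name policy is "enforcing", so the unqualified reference kicbase/echo-server:latest is rejected as ambiguous instead of being resolved against a default registry. A minimal workaround sketch, not part of the recorded run (the drop-in filename below is illustrative):

	# Option 1: bypass short-name resolution by fully qualifying the image.
	kubectl --context functional-656017 set image deployment/hello-node-connect \
	  echo-server=docker.io/kicbase/echo-server:latest

	# Option 2: pin the short name to a registry inside the minikube node.
	minikube -p functional-656017 ssh -- sudo tee /etc/containers/registries.conf.d/99-echo-server.conf <<-'EOF'
	[aliases]
	"kicbase/echo-server" = "docker.io/kicbase/echo-server"
	EOF

Fully qualifying the reference is the smaller change; the alias drop-in fixes every workload on the node without touching manifests.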

x
+
TestFunctional/parallel/PersistentVolumeClaim (368.94s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [f4ba91c4-85bf-49c5-af02-b213cc930e16] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003622025s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-656017 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-656017 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-656017 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-656017 apply -f testdata/storage-provisioner/pod.yaml
I1026 14:28:31.731929  845095 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [303ca5dd-0848-4899-89c5-86a1cf327162] Pending
helpers_test.go:352: "sp-pod" [303ca5dd-0848-4899-89c5-86a1cf327162] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-656017 -n functional-656017
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-10-26 14:34:32.06941883 +0000 UTC m=+1225.624816221
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-656017 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-656017 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-656017/192.168.49.2
Start Time:       Sun, 26 Oct 2025 14:28:31 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:  10.244.0.7
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8slwm (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-8slwm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  6m                    default-scheduler  Successfully assigned default/sp-pod to functional-656017
  Warning  Failed     4m59s                 kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     103s (x2 over 4m59s)  kubelet            Error: ErrImagePull
  Warning  Failed     103s                  kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    91s (x2 over 4m59s)   kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     91s (x2 over 4m59s)   kubelet            Error: ImagePullBackOff
  Normal   Pulling    78s (x3 over 6m)      kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-656017 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-656017 logs sp-pod -n default: exit status 1 (68.730457ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-656017 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
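The PVC plumbing itself behaved here: the claim bound and sp-pod was scheduled within seconds. Every subsequent failure is Docker Hub's unauthenticated pull rate limit (toomanyrequests). A hedged sketch of authenticating the pulls for a rerun, with placeholder credentials and an illustrative secret name:

	# Create a Docker Hub pull secret, then let the default service account use it.
	kubectl --context functional-656017 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	kubectl --context functional-656017 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'

Attaching the secret to the default service account covers sp-pod and the other default-namespace workloads in this report without editing each manifest.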
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-656017
helpers_test.go:243: (dbg) docker inspect functional-656017:

-- stdout --
	[
	    {
	        "Id": "7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51",
	        "Created": "2025-10-26T14:26:15.662564705Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 871791,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T14:26:15.695471555Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51/hostname",
	        "HostsPath": "/var/lib/docker/containers/7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51/hosts",
	        "LogPath": "/var/lib/docker/containers/7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51/7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51-json.log",
	        "Name": "/functional-656017",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-656017:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-656017",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51",
	                "LowerDir": "/var/lib/docker/overlay2/18941be66dc438b29a0d8f1b6cfb5e94b3f5364e7ff8ec834dc7e25ed24e4d78-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/18941be66dc438b29a0d8f1b6cfb5e94b3f5364e7ff8ec834dc7e25ed24e4d78/merged",
	                "UpperDir": "/var/lib/docker/overlay2/18941be66dc438b29a0d8f1b6cfb5e94b3f5364e7ff8ec834dc7e25ed24e4d78/diff",
	                "WorkDir": "/var/lib/docker/overlay2/18941be66dc438b29a0d8f1b6cfb5e94b3f5364e7ff8ec834dc7e25ed24e4d78/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-656017",
	                "Source": "/var/lib/docker/volumes/functional-656017/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-656017",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-656017",
	                "name.minikube.sigs.k8s.io": "functional-656017",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0491f8ee0884dffdfb60cf16586bdc089924ef954ce989676a59241184322961",
	            "SandboxKey": "/var/run/docker/netns/0491f8ee0884",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33546"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33547"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33550"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33548"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33549"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-656017": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:57:b7:c3:e1:66",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "530477eeed5a5cd1e2f2740c0bd7a64c9f8fbcffeceb135f9b5907f3c53af82d",
	                    "EndpointID": "0da32925992ecd2bb8901e4bfaa39ba5c2a59c288a0aa7b60b19bd3c9f7d4c8f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-656017",
	                        "7e6d295c9fb0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
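The inspect output above explains the empty HostPort fields: minikube publishes each node port (22, 2376, 5000, 8441, 32443) on 127.0.0.1 and lets Docker assign the host port at runtime, so HostConfig.PortBindings is blank while the assigned values (33546-33550) appear only under NetworkSettings.Ports. As a minimal sketch against the profile container above, the mapped SSH port can be read back with a Go-template query:

    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-656017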
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-656017 -n functional-656017
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-656017 logs -n 25: (1.260426662s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-656017 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:28 UTC │                     │
	│ mount     │ -p functional-656017 /tmp/TestFunctionalparallelMountCmdany-port2342765656/001:/mount-9p --alsologtostderr -v=1                   │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:28 UTC │                     │
	│ ssh       │ functional-656017 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:28 UTC │ 26 Oct 25 14:28 UTC │
	│ ssh       │ functional-656017 ssh -- ls -la /mount-9p                                                                                         │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:28 UTC │ 26 Oct 25 14:28 UTC │
	│ ssh       │ functional-656017 ssh cat /mount-9p/test-1761488914927877726                                                                      │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:28 UTC │ 26 Oct 25 14:28 UTC │
	│ ssh       │ functional-656017 ssh stat /mount-9p/created-by-test                                                                              │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │ 26 Oct 25 14:29 UTC │
	│ ssh       │ functional-656017 ssh stat /mount-9p/created-by-pod                                                                               │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │ 26 Oct 25 14:29 UTC │
	│ ssh       │ functional-656017 ssh sudo umount -f /mount-9p                                                                                    │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │ 26 Oct 25 14:29 UTC │
	│ ssh       │ functional-656017 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │                     │
	│ mount     │ -p functional-656017 /tmp/TestFunctionalparallelMountCmdspecific-port3903809018/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │                     │
	│ ssh       │ functional-656017 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │ 26 Oct 25 14:29 UTC │
	│ ssh       │ functional-656017 ssh -- ls -la /mount-9p                                                                                         │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │ 26 Oct 25 14:29 UTC │
	│ ssh       │ functional-656017 ssh sudo umount -f /mount-9p                                                                                    │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │                     │
	│ mount     │ -p functional-656017 /tmp/TestFunctionalparallelMountCmdVerifyCleanup688021677/001:/mount3 --alsologtostderr -v=1                 │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │                     │
	│ mount     │ -p functional-656017 /tmp/TestFunctionalparallelMountCmdVerifyCleanup688021677/001:/mount1 --alsologtostderr -v=1                 │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │                     │
	│ mount     │ -p functional-656017 /tmp/TestFunctionalparallelMountCmdVerifyCleanup688021677/001:/mount2 --alsologtostderr -v=1                 │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │                     │
	│ ssh       │ functional-656017 ssh findmnt -T /mount1                                                                                          │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │                     │
	│ ssh       │ functional-656017 ssh findmnt -T /mount1                                                                                          │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │ 26 Oct 25 14:29 UTC │
	│ ssh       │ functional-656017 ssh findmnt -T /mount2                                                                                          │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │ 26 Oct 25 14:29 UTC │
	│ ssh       │ functional-656017 ssh findmnt -T /mount3                                                                                          │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │ 26 Oct 25 14:29 UTC │
	│ mount     │ -p functional-656017 --kill=true                                                                                                  │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │                     │
	│ start     │ -p functional-656017 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                         │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │                     │
	│ start     │ -p functional-656017 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                         │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │                     │
	│ start     │ -p functional-656017 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                   │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-656017 --alsologtostderr -v=1                                                                    │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:29 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
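	
	The audit rows above record the 9p mount checks the mount tests drive. A by-hand equivalent, sketched with an illustrative host path, holds the mount in one terminal and verifies it from another:
	
	    minikube -p functional-656017 mount /tmp/data:/mount-9p --port 46464
	    minikube -p functional-656017 ssh -- findmnt -T /mount-9p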
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 14:29:41
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 14:29:41.706256  884792 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:29:41.706528  884792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:29:41.706536  884792 out.go:374] Setting ErrFile to fd 2...
	I1026 14:29:41.706540  884792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:29:41.706726  884792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:29:41.707221  884792 out.go:368] Setting JSON to false
	I1026 14:29:41.708137  884792 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7930,"bootTime":1761481052,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 14:29:41.708256  884792 start.go:141] virtualization: kvm guest
	I1026 14:29:41.710295  884792 out.go:179] * [functional-656017] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 14:29:41.711616  884792 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 14:29:41.711623  884792 notify.go:220] Checking for updates...
	I1026 14:29:41.713376  884792 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:29:41.714796  884792 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 14:29:41.716100  884792 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 14:29:41.717345  884792 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 14:29:41.718672  884792 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 14:29:41.720405  884792 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:29:41.720928  884792 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:29:41.745671  884792 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 14:29:41.745765  884792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:29:41.803208  884792 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-26 14:29:41.791510406 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 14:29:41.803325  884792 docker.go:318] overlay module found
	I1026 14:29:41.805202  884792 out.go:179] * Using the docker driver based on existing profile
	I1026 14:29:41.806275  884792 start.go:305] selected driver: docker
	I1026 14:29:41.806287  884792 start.go:925] validating driver "docker" against &{Name:functional-656017 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-656017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:29:41.806380  884792 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 14:29:41.806469  884792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:29:41.862329  884792 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-26 14:29:41.852410907 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 14:29:41.863025  884792 cni.go:84] Creating CNI manager for ""
	I1026 14:29:41.863097  884792 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 14:29:41.863156  884792 start.go:349] cluster config:
	{Name:functional-656017 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-656017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:29:41.864878  884792 out.go:179] * dry-run validation complete!
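	
	The Last Start log above is a dry-run pass: minikube validates the saved docker/crio profile and stops at "dry-run validation complete!" without touching the cluster. The invocation recorded in the audit table reproduces it:
	
	    out/minikube-linux-amd64 start -p functional-656017 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio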
	
	
	==> CRI-O <==
	Oct 26 14:30:30 functional-656017 crio[3608]: time="2025-10-26T14:30:30.310535731Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=102f7bff-5866-4d8b-a7fd-ffee646ea61b name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:30:46 functional-656017 crio[3608]: time="2025-10-26T14:30:46.822806902Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 26 14:31:17 functional-656017 crio[3608]: time="2025-10-26T14:31:17.470100725Z" level=info msg="Pulling image: docker.io/nginx:latest" id=c7711a6c-54a5-4091-829e-6b63c771eb95 name=/runtime.v1.ImageService/PullImage
	Oct 26 14:31:17 functional-656017 crio[3608]: time="2025-10-26T14:31:17.484919946Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 26 14:31:17 functional-656017 crio[3608]: time="2025-10-26T14:31:17.939986114Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=617f3bec-123c-435a-a47c-a0641180ceb4 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:31:17 functional-656017 crio[3608]: time="2025-10-26T14:31:17.940234687Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=617f3bec-123c-435a-a47c-a0641180ceb4 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:31:17 functional-656017 crio[3608]: time="2025-10-26T14:31:17.940295839Z" level=info msg="Neither image nor artfiact docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 found" id=617f3bec-123c-435a-a47c-a0641180ceb4 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:31:32 functional-656017 crio[3608]: time="2025-10-26T14:31:32.309892946Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=da0e4153-b72f-4afa-a985-b5e48101448d name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:31:32 functional-656017 crio[3608]: time="2025-10-26T14:31:32.310121813Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=da0e4153-b72f-4afa-a985-b5e48101448d name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:31:32 functional-656017 crio[3608]: time="2025-10-26T14:31:32.310203436Z" level=info msg="Neither image nor artfiact docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 found" id=da0e4153-b72f-4afa-a985-b5e48101448d name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:31:48 functional-656017 crio[3608]: time="2025-10-26T14:31:48.134279992Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 26 14:32:18 functional-656017 crio[3608]: time="2025-10-26T14:32:18.826631408Z" level=info msg="Trying to access \"docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8\""
	Oct 26 14:32:49 functional-656017 crio[3608]: time="2025-10-26T14:32:49.475413982Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c6c67fc8-744c-4afe-98ab-1746d97f2bf3 name=/runtime.v1.ImageService/PullImage
	Oct 26 14:32:49 functional-656017 crio[3608]: time="2025-10-26T14:32:49.476283304Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b651e22f-ba89-41f2-9841-e5509f443f20 name=/runtime.v1.ImageService/PullImage
	Oct 26 14:32:49 functional-656017 crio[3608]: time="2025-10-26T14:32:49.477086313Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=acbabdeb-a410-4bee-a2e8-a84465bd719e name=/runtime.v1.ImageService/PullImage
	Oct 26 14:32:49 functional-656017 crio[3608]: time="2025-10-26T14:32:49.48158178Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 26 14:33:20 functional-656017 crio[3608]: time="2025-10-26T14:33:20.130535044Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 26 14:33:50 functional-656017 crio[3608]: time="2025-10-26T14:33:50.782315691Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=2c3d9041-f844-4431-9d86-03303d152ab5 name=/runtime.v1.ImageService/PullImage
	Oct 26 14:33:50 functional-656017 crio[3608]: time="2025-10-26T14:33:50.787687907Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 26 14:34:03 functional-656017 crio[3608]: time="2025-10-26T14:34:03.310500581Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=2972b789-3678-4496-a6c5-a21c382f355f name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:03 functional-656017 crio[3608]: time="2025-10-26T14:34:03.310703049Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=2972b789-3678-4496-a6c5-a21c382f355f name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:03 functional-656017 crio[3608]: time="2025-10-26T14:34:03.310740475Z" level=info msg="Neither image nor artfiact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=2972b789-3678-4496-a6c5-a21c382f355f name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:18 functional-656017 crio[3608]: time="2025-10-26T14:34:18.310223328Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=d959a927-cd40-48a8-a611-3da3b7c204e7 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:18 functional-656017 crio[3608]: time="2025-10-26T14:34:18.310422226Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=d959a927-cd40-48a8-a611-3da3b7c204e7 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:34:18 functional-656017 crio[3608]: time="2025-10-26T14:34:18.310482908Z" level=info msg="Neither image nor artfiact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=d959a927-cd40-48a8-a611-3da3b7c204e7 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d72feea374140       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   4 minutes ago       Exited              mount-munger              0                   321356887ae61       busybox-mount                               default
	ae1aa39570023       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e       6 minutes ago       Running             nginx                     0                   6f2dd0f292d4d       nginx-svc                                   default
	2db865dc7b069       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       2                   e1cf8e1f14fd9       storage-provisioner                         kube-system
	ca2758c3b0747       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      6 minutes ago       Running             kube-apiserver            0                   b8dc315566e53       kube-apiserver-functional-656017            kube-system
	9b1b7e8dd2367       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      6 minutes ago       Running             kube-controller-manager   1                   695d7ad4f4a81       kube-controller-manager-functional-656017   kube-system
	ff718389fd0d5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      6 minutes ago       Running             kube-scheduler            1                   ffea8b2f09a06       kube-scheduler-functional-656017            kube-system
	c376e39f3b52c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      6 minutes ago       Running             etcd                      1                   65ab1cbe95671       etcd-functional-656017                      kube-system
	dae3a7eefaef6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Exited              storage-provisioner       1                   e1cf8e1f14fd9       storage-provisioner                         kube-system
	f10fe1a825ece       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      7 minutes ago       Running             coredns                   1                   4b282a34c4d98       coredns-66bc5c9577-fvls7                    kube-system
	ac2a4f4184e61       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      7 minutes ago       Running             kube-proxy                1                   1366df45696e6       kube-proxy-lzmlr                            kube-system
	3a00b00b881c9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      7 minutes ago       Running             kindnet-cni               1                   0a3c75fa75530       kindnet-v9qhm                               kube-system
	21a7b04b3aa18       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      7 minutes ago       Exited              coredns                   0                   4b282a34c4d98       coredns-66bc5c9577-fvls7                    kube-system
	925d395db73b9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      7 minutes ago       Exited              kindnet-cni               0                   0a3c75fa75530       kindnet-v9qhm                               kube-system
	10f77c7ed3607       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      7 minutes ago       Exited              kube-proxy                0                   1366df45696e6       kube-proxy-lzmlr                            kube-system
	845e9b22f7c9c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      8 minutes ago       Exited              kube-controller-manager   0                   695d7ad4f4a81       kube-controller-manager-functional-656017   kube-system
	d2f21e90bdfb2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      8 minutes ago       Exited              kube-scheduler            0                   ffea8b2f09a06       kube-scheduler-functional-656017            kube-system
	de4a79c72b10d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      8 minutes ago       Exited              etcd                      0                   65ab1cbe95671       etcd-functional-656017                      kube-system
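	
	The table above is CRI-level state, so the same view is available from inside the node; a minimal check, assuming crictl is on the node's PATH as it is in the kicbase image, is:
	
	    minikube -p functional-656017 ssh -- sudo crictl ps -a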
	
	
	==> coredns [21a7b04b3aa18668b05a7e74c0c854e868c19d467f7d6cb885dc923426d1175d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40376 - 33371 "HINFO IN 7412649040229934986.2398417335010100422. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.479800978s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f10fe1a825ece4e7e5704e2cc7128f0a02fac89b9c51059a6cdd22793ea14365] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52631 - 8649 "HINFO IN 7064989214614263487.4096664645323782067. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.119610551s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
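	
	The "connection refused" and "TLS handshake timeout" errors against 10.96.0.1:443 come from the restarted CoreDNS pod polling the in-cluster kubernetes Service while kube-apiserver was itself coming back, and the "Still waiting on: kubernetes" lines show the readiness plugin holding until the API answers. A quick confirmation that the Service is backed again (context name assumed to match the profile) is:
	
	    kubectl --context functional-656017 -n default get endpoints kubernetes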
	
	
	==> describe nodes <==
	Name:               functional-656017
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-656017
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=functional-656017
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T14_26_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 14:26:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-656017
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 14:34:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 14:33:19 +0000   Sun, 26 Oct 2025 14:26:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 14:33:19 +0000   Sun, 26 Oct 2025 14:26:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 14:33:19 +0000   Sun, 26 Oct 2025 14:26:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 14:33:19 +0000   Sun, 26 Oct 2025 14:26:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-656017
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                48b09da4-51f5-4aad-ba21-72df28aa14f3
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-cnh5r                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  default                     hello-node-connect-7d85dfc575-5l852           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-66bc5c9577-fvls7                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m58s
	  kube-system                 etcd-functional-656017                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m3s
	  kube-system                 kindnet-v9qhm                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m58s
	  kube-system                 kube-apiserver-functional-656017              250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 kube-controller-manager-functional-656017     200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kube-proxy-lzmlr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 kube-scheduler-functional-656017              100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-wbqc8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-94hj8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 7m57s                kube-proxy       
	  Normal  Starting                 6m22s                kube-proxy       
	  Normal  Starting                 8m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m3s                 kubelet          Node functional-656017 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m3s                 kubelet          Node functional-656017 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m3s                 kubelet          Node functional-656017 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m59s                node-controller  Node functional-656017 event: Registered Node functional-656017 in Controller
	  Normal  NodeReady                7m47s                kubelet          Node functional-656017 status is now: NodeReady
	  Normal  Starting                 7m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m1s (x8 over 7m1s)  kubelet          Node functional-656017 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m1s (x8 over 7m1s)  kubelet          Node functional-656017 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m1s (x8 over 7m1s)  kubelet          Node functional-656017 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m37s                node-controller  Node functional-656017 event: Registered Node functional-656017 in Controller
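	
	The Allocated resources block is the column sums of the pod table above: CPU requests 850m = 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler); the 100m CPU limit is kindnet's alone; memory requests 220Mi = 70Mi (coredns) + 100Mi (etcd) + 50Mi (kindnet); and memory limits 220Mi = 170Mi (coredns) + 50Mi (kindnet).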
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
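	
	The "martian source" lines are the kernel flagging packets whose 10.244.x.x pod source addresses look out of place on eth0; with kindnet bridging pod traffic inside the node container this is routine noise, not a failure. If the log spam is unwanted on a CI host, the logging itself is a sysctl knob (a host-level setting, shown only as an illustration):
	
	    sudo sysctl -w net.ipv4.conf.all.log_martians=0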
	
	
	==> etcd [c376e39f3b52c0c27a12c20a752c2867f4459d455be0af813e18bb55ac82d433] <==
	{"level":"warn","ts":"2025-10-26T14:27:53.013098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.026639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.032605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.039078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.045231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.052346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.058547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.065475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.072805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.080415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.093382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.107398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.114758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.121179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.128634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.135424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.143287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.150345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.156984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.164095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.183548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.187308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.193611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.200153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.250747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36940","server-name":"","error":"EOF"}
	
	
	==> etcd [de4a79c72b10d2e604ace86f200c4beb93e1fa32406f916e7886b8232029ecce] <==
	{"level":"warn","ts":"2025-10-26T14:26:27.278460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.284638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.291821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.307848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.314269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.320424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.363779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35742","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T14:27:30.562035Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-26T14:27:30.562147Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-656017","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-26T14:27:30.562265Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-26T14:27:30.563832Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-26T14:27:30.563898Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T14:27:30.563920Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-26T14:27:30.563978Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-26T14:27:30.564001Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-26T14:27:30.564001Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-26T14:27:30.564033Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-26T14:27:30.564087Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-26T14:27:30.564101Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-26T14:27:30.564070Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-26T14:27:30.564126Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T14:27:30.566120Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-26T14:27:30.566210Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T14:27:30.566245Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-26T14:27:30.566255Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-656017","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 14:34:33 up  2:17,  0 user,  load average: 0.25, 0.45, 0.69
	Linux functional-656017 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3a00b00b881c9d02ec005d4a24113c1b2ef56dd5b59b8f853a70bead7cfbbe7b] <==
	I1026 14:32:30.518274       1 main.go:301] handling current node
	I1026 14:32:40.518465       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:32:40.518508       1 main.go:301] handling current node
	I1026 14:32:50.515344       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:32:50.515381       1 main.go:301] handling current node
	I1026 14:33:00.513432       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:33:00.513502       1 main.go:301] handling current node
	I1026 14:33:10.512203       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:33:10.512242       1 main.go:301] handling current node
	I1026 14:33:20.512444       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:33:20.512489       1 main.go:301] handling current node
	I1026 14:33:30.511719       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:33:30.511773       1 main.go:301] handling current node
	I1026 14:33:40.511778       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:33:40.511821       1 main.go:301] handling current node
	I1026 14:33:50.518558       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:33:50.518594       1 main.go:301] handling current node
	I1026 14:34:00.512731       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:34:00.512767       1 main.go:301] handling current node
	I1026 14:34:10.512666       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:34:10.512708       1 main.go:301] handling current node
	I1026 14:34:20.511962       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:34:20.512005       1 main.go:301] handling current node
	I1026 14:34:30.512234       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:34:30.512272       1 main.go:301] handling current node
	
	
	==> kindnet [925d395db73b90f6f8405290b4a9c93369786816f1fce17a08dc90ee359443d4] <==
	I1026 14:26:36.186777       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 14:26:36.187051       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1026 14:26:36.215845       1 main.go:148] setting mtu 1500 for CNI 
	I1026 14:26:36.215876       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 14:26:36.215900       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T14:26:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 14:26:36.417338       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 14:26:36.417362       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 14:26:36.417377       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 14:26:36.417977       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 14:26:36.817566       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 14:26:36.817595       1 metrics.go:72] Registering metrics
	I1026 14:26:36.817681       1 controller.go:711] "Syncing nftables rules"
	I1026 14:26:46.417194       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:26:46.417266       1 main.go:301] handling current node
	I1026 14:26:56.417258       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:26:56.417297       1 main.go:301] handling current node
	I1026 14:27:06.417463       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:27:06.417500       1 main.go:301] handling current node
	I1026 14:27:16.417114       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:27:16.417154       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ca2758c3b0747b23b01630bbf07b4e70b74246e999a371f52426068264bb6eaa] <==
	I1026 14:27:53.701432       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1026 14:27:53.701495       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1026 14:27:53.701772       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 14:27:53.702012       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 14:27:53.702062       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 14:27:53.706550       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 14:27:53.728989       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 14:27:53.735942       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 14:27:54.604728       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1026 14:27:54.811076       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1026 14:27:54.812437       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 14:27:54.817571       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 14:27:55.181493       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 14:27:55.270909       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 14:27:55.279925       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 14:27:55.342794       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 14:27:55.350188       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 14:27:57.331287       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 14:28:20.974427       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.174.82"}
	I1026 14:28:25.530781       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.150.87"}
	I1026 14:28:25.819878       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.185.91"}
	I1026 14:28:26.786816       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.80.146"}
	I1026 14:29:42.711630       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 14:29:42.815661       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.251.128"}
	I1026 14:29:42.828033       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.232.5"}
	
	
	==> kube-controller-manager [845e9b22f7c9cbbfd8966f1c14a659b863ea4d252c9783d731959d29e933f667] <==
	I1026 14:26:34.754690       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 14:26:34.754694       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 14:26:34.754715       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 14:26:34.754731       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 14:26:34.754876       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 14:26:34.754950       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 14:26:34.755053       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 14:26:34.755068       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 14:26:34.755106       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 14:26:34.755491       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 14:26:34.755512       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 14:26:34.755581       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 14:26:34.755590       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 14:26:34.758063       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 14:26:34.758145       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 14:26:34.758211       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 14:26:34.758222       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 14:26:34.758230       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 14:26:34.760332       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 14:26:34.763565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:26:34.769710       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-656017" podCIDRs=["10.244.0.0/24"]
	I1026 14:26:34.777273       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 14:26:34.780281       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 14:26:34.783551       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 14:26:49.706645       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [9b1b7e8dd23671d7230b504e22a6194c4a3dded87c9396dc7558bfcd19bfd0cd] <==
	I1026 14:27:57.021496       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 14:27:57.023787       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 14:27:57.026170       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 14:27:57.026242       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 14:27:57.026243       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 14:27:57.026443       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 14:27:57.026555       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 14:27:57.026577       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 14:27:57.027368       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 14:27:57.027394       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1026 14:27:57.027445       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 14:27:57.027452       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 14:27:57.027571       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 14:27:57.027828       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 14:27:57.028511       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 14:27:57.028539       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 14:27:57.028779       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 14:27:57.030794       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:27:57.042225       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1026 14:29:42.761746       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:29:42.767312       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:29:42.768859       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:29:42.770695       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:29:42.772025       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:29:42.777667       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
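Editor's note: the burst of "serviceaccount \"kubernetes-dashboard\" not found" errors at 14:29:42 is an ordering race rather than a persistent failure: the dashboard ReplicaSets were submitted moments before their ServiceAccount, and the controller retried until it appeared (the pods do get created, as the kubelet section below shows). A hedged client-go sketch of sidestepping the noise by waiting for the ServiceAccount before creating dependent workloads; the names and wiring here are hypothetical, not minikube's actual addon code:

	package main

	import (
		"context"
		"time"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForServiceAccount polls until the named ServiceAccount exists, so that
	// workloads referencing it are created after it. The controller-manager
	// retries either way; this only avoids the transient sync errors above.
	func waitForServiceAccount(ctx context.Context, c kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 30*time.Second, true,
			func(ctx context.Context) (bool, error) {
				_, err := c.CoreV1().ServiceAccounts(ns).Get(ctx, name, metav1.GetOptions{})
				if apierrors.IsNotFound(err) {
					return false, nil // not there yet; keep polling
				}
				return err == nil, err
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForServiceAccount(context.Background(), client, "kubernetes-dashboard", "kubernetes-dashboard"); err != nil {
			panic(err)
		}
		// ...create the Deployment / ReplicaSet that uses the ServiceAccount here.
	}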
	
	
	==> kube-proxy [10f77c7ed3607a9f2e9e1b7386954e24cfa2bd9656b621d9976d0cb1df09d688] <==
	I1026 14:26:36.002635       1 server_linux.go:53] "Using iptables proxy"
	I1026 14:26:36.075108       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 14:26:36.175750       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 14:26:36.175810       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 14:26:36.175960       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 14:26:36.195046       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 14:26:36.195099       1 server_linux.go:132] "Using iptables Proxier"
	I1026 14:26:36.200612       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 14:26:36.201019       1 server.go:527] "Version info" version="v1.34.1"
	I1026 14:26:36.201037       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 14:26:36.202299       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 14:26:36.202325       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 14:26:36.202346       1 config.go:200] "Starting service config controller"
	I1026 14:26:36.202368       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 14:26:36.202676       1 config.go:309] "Starting node config controller"
	I1026 14:26:36.202774       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 14:26:36.202783       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 14:26:36.202358       1 config.go:106] "Starting endpoint slice config controller"
	I1026 14:26:36.203225       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 14:26:36.302515       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 14:26:36.303744       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 14:26:36.303782       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [ac2a4f4184e61b6f3e212173f5fd287a266fe9854e7e6d92cc6cc308487c717e] <==
	E1026 14:27:20.271682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-656017&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:27:21.330105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-656017&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:27:23.056459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-656017&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:27:28.667874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-656017&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:27:50.258783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-656017&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1026 14:28:11.271153       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 14:28:11.271219       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 14:28:11.271337       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 14:28:11.291259       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 14:28:11.291313       1 server_linux.go:132] "Using iptables Proxier"
	I1026 14:28:11.297113       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 14:28:11.297507       1 server.go:527] "Version info" version="v1.34.1"
	I1026 14:28:11.297534       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 14:28:11.298650       1 config.go:200] "Starting service config controller"
	I1026 14:28:11.298673       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 14:28:11.298713       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 14:28:11.298733       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 14:28:11.298740       1 config.go:106] "Starting endpoint slice config controller"
	I1026 14:28:11.298771       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 14:28:11.298847       1 config.go:309] "Starting node config controller"
	I1026 14:28:11.298855       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 14:28:11.298862       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 14:28:11.399320       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 14:28:11.399377       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 14:28:11.399421       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d2f21e90bdfb20b52baadfea12c2c6a9a2d85eb2e69ebba8e079dbf1272e4e5a] <==
	E1026 14:26:27.781998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 14:26:27.781790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 14:26:27.782054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 14:26:27.782048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 14:26:27.782186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 14:26:27.782196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 14:26:27.782259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 14:26:27.782261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 14:26:27.782256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 14:26:27.782354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 14:26:28.762282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 14:26:28.766433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 14:26:28.770439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 14:26:28.779033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:26:28.783221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 14:26:28.788274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 14:26:28.922828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 14:26:28.963889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1026 14:26:29.376995       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 14:27:30.452222       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 14:27:30.452235       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1026 14:27:30.452275       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1026 14:27:30.452300       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1026 14:27:30.452329       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1026 14:27:30.452358       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ff718389fd0d587773fca5861c2ecbbf06d2e55df34a9daf84bc2a88de39e750] <==
	I1026 14:27:52.332052       1 serving.go:386] Generated self-signed cert in-memory
	W1026 14:27:53.620664       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 14:27:53.620791       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 14:27:53.620809       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 14:27:53.620819       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 14:27:53.647470       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 14:27:53.647583       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 14:27:53.649836       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 14:27:53.649877       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 14:27:53.650229       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 14:27:53.650262       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 14:27:53.751063       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 14:31:17 functional-656017 kubelet[4154]: E1026 14:31:17.940697    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-94hj8" podUID="daeb2d33-bcbf-4fc6-b399-1d9cb26423cd"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.474835    4154 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.474905    4154 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.475126    4154 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(303ca5dd-0848-4899-89c5-86a1cf327162): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.475210    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="303ca5dd-0848-4899-89c5-86a1cf327162"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.475818    4154 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.475862    4154 kuberuntime_image.go:43] "Failed to pull image" err="short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.476017    4154 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-cnh5r_default(c9b890f0-43a9-4379-af0d-c767a40fb9a2): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.476319    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-cnh5r" podUID="c9b890f0-43a9-4379-af0d-c767a40fb9a2"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.476584    4154 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.476631    4154 kuberuntime_image.go:43] "Failed to pull image" err="short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.476818    4154 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-5l852_default(712a7bde-503f-4d52-bb5e-f79f7ce120a7): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Oct 26 14:32:49 functional-656017 kubelet[4154]: E1026 14:32:49.477112    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5l852" podUID="712a7bde-503f-4d52-bb5e-f79f7ce120a7"
	Oct 26 14:33:01 functional-656017 kubelet[4154]: E1026 14:33:01.309598    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-cnh5r" podUID="c9b890f0-43a9-4379-af0d-c767a40fb9a2"
	Oct 26 14:33:01 functional-656017 kubelet[4154]: E1026 14:33:01.309951    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="303ca5dd-0848-4899-89c5-86a1cf327162"
	Oct 26 14:33:01 functional-656017 kubelet[4154]: E1026 14:33:01.310030    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5l852" podUID="712a7bde-503f-4d52-bb5e-f79f7ce120a7"
	Oct 26 14:33:13 functional-656017 kubelet[4154]: E1026 14:33:13.309883    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5l852" podUID="712a7bde-503f-4d52-bb5e-f79f7ce120a7"
	Oct 26 14:33:13 functional-656017 kubelet[4154]: E1026 14:33:13.309964    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-cnh5r" podUID="c9b890f0-43a9-4379-af0d-c767a40fb9a2"
	Oct 26 14:33:24 functional-656017 kubelet[4154]: E1026 14:33:24.309616    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-cnh5r" podUID="c9b890f0-43a9-4379-af0d-c767a40fb9a2"
	Oct 26 14:33:27 functional-656017 kubelet[4154]: E1026 14:33:27.309404    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5l852" podUID="712a7bde-503f-4d52-bb5e-f79f7ce120a7"
	Oct 26 14:33:50 functional-656017 kubelet[4154]: E1026 14:33:50.781655    4154 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 26 14:33:50 functional-656017 kubelet[4154]: E1026 14:33:50.781725    4154 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 26 14:33:50 functional-656017 kubelet[4154]: E1026 14:33:50.783318    4154 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-wbqc8_kubernetes-dashboard(3cfdea14-a298-4176-999a-892bdf252dfc): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 26 14:33:50 functional-656017 kubelet[4154]: E1026 14:33:50.783403    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wbqc8" podUID="3cfdea14-a298-4176-999a-892bdf252dfc"
	Oct 26 14:34:03 functional-656017 kubelet[4154]: E1026 14:34:03.311240    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wbqc8" podUID="3cfdea14-a298-4176-999a-892bdf252dfc"
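Editor's note: two distinct pull failures dominate the kubelet log above. The first is Docker Hub's anonymous pull limit ("toomanyrequests"). Docker documents a way to inspect the current quota without consuming a pull, using the special ratelimitpreview/test repository; a minimal Go sketch of that check (endpoint names follow Docker's public docs, nothing here comes from this report):

	package main

	import (
		"encoding/json"
		"fmt"
		"net/http"
	)

	// Fetch an anonymous pull token, then HEAD the ratelimitpreview/test
	// manifest and print the ratelimit-* response headers. Per Docker's
	// documentation, a HEAD request does not count against the pull quota.
	func main() {
		resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		var tok struct {
			Token string `json:"token"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
			panic(err)
		}

		req, err := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
		if err != nil {
			panic(err)
		}
		req.Header.Set("Authorization", "Bearer "+tok.Token)
		head, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer head.Body.Close()
		fmt.Println("limit:    ", head.Header.Get("ratelimit-limit"))     // e.g. "100;w=21600"
		fmt.Println("remaining:", head.Header.Get("ratelimit-remaining")) // e.g. "98;w=21600"
	}

The second failure, "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list", is CRI-O refusing to guess a registry for an unqualified image name. Docker-style clients expand such names to docker.io implicitly; the sketch below shows that normalization with the github.com/distribution/reference library, purely as an illustration of why fully qualifying the image (docker.io/kicbase/echo-server:latest) avoids the error:

	package main

	import (
		"fmt"

		"github.com/distribution/reference"
	)

	// Docker-style normalization silently expands an unqualified name to the
	// docker.io registry; CRI-O in enforcing short-name mode refuses to guess
	// among its configured registries, hence the "ambiguous list" error.
	func main() {
		named, err := reference.ParseNormalizedNamed("kicbase/echo-server:latest")
		if err != nil {
			panic(err)
		}
		fmt.Println(named.String()) // docker.io/kicbase/echo-server:latest
	}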
	
	
	==> storage-provisioner [2db865dc7b06999bd7ed228e936a8c3317814c951c195fea0ce4636cd813806f] <==
	W1026 14:34:08.473583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:10.476863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:10.481260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:12.484732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:12.489129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:14.492131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:14.497222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:16.500624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:16.504311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:18.507802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:18.511615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:20.515478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:20.519927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:22.523570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:22.527682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:24.530861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:24.534716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:26.538038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:26.542656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:28.545899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:28.550000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:30.553399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:30.558582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:32.561508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:34:32.565484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
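Editor's note: the warning repeating every two seconds above comes from the storage-provisioner still reading core/v1 Endpoints, most likely for its leader-election lock; Kubernetes deprecated that API in favor of discovery.k8s.io/v1 EndpointSlice. A hedged client-go sketch of the replacement read path, assuming in-cluster wiring rather than the provisioner's real code:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	// List discovery.k8s.io/v1 EndpointSlices instead of the deprecated
	// core/v1 Endpoints, which is the migration the repeated warning asks for.
	func main() {
		cfg, err := rest.InClusterConfig() // the provisioner runs in-cluster
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		slices, err := client.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}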
	
	
	==> storage-provisioner [dae3a7eefaef6e45c44cb57f835e13d80ec46cebc496a065528ebef1b3f3dc50] <==
	I1026 14:27:20.179254       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 14:27:20.181189       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-656017 -n functional-656017
helpers_test.go:269: (dbg) Run:  kubectl --context functional-656017 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-cnh5r hello-node-connect-7d85dfc575-5l852 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wbqc8 kubernetes-dashboard-855c9754f9-94hj8
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-656017 describe pod busybox-mount hello-node-75c85bcc94-cnh5r hello-node-connect-7d85dfc575-5l852 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wbqc8 kubernetes-dashboard-855c9754f9-94hj8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-656017 describe pod busybox-mount hello-node-75c85bcc94-cnh5r hello-node-connect-7d85dfc575-5l852 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wbqc8 kubernetes-dashboard-855c9754f9-94hj8: exit status 1 (86.53649ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-656017/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:28:36 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://d72feea3741400a60884b44aace0d50fa00c8a56e531a1e8aeeb4607f039e166
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 26 Oct 2025 14:29:34 +0000
	      Finished:     Sun, 26 Oct 2025 14:29:34 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xp9j7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-xp9j7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m57s  default-scheduler  Successfully assigned default/busybox-mount to functional-656017
	  Normal  Pulling    5m57s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m     kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 730ms (57.013s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m     kubelet            Created container: mount-munger
	  Normal  Started    5m     kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-cnh5r
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-656017/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:28:26 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ks69p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ks69p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m7s                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-cnh5r to functional-656017
	  Warning  Failed     105s (x3 over 6m7s)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     105s (x3 over 6m7s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    70s (x5 over 6m6s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     70s (x5 over 6m6s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    59s (x4 over 6m7s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-5l852
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-656017/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:28:25 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mjv8l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-mjv8l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m8s                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-5l852 to functional-656017
	  Warning  Failed     105s (x3 over 6m7s)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     105s (x3 over 6m7s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    67s (x5 over 6m6s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     67s (x5 over 6m6s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    53s (x4 over 6m8s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-656017/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:28:31 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8slwm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-8slwm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-656017
	  Warning  Failed     5m1s                 kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     105s (x2 over 5m1s)  kubelet            Error: ErrImagePull
	  Warning  Failed     105s                 kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    93s (x2 over 5m1s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     93s (x2 over 5m1s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    80s (x3 over 6m2s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-wbqc8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-94hj8" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-656017 describe pod busybox-mount hello-node-75c85bcc94-cnh5r hello-node-connect-7d85dfc575-5l852 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wbqc8 kubernetes-dashboard-855c9754f9-94hj8: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (368.94s)
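Two distinct pull failures account for the non-running pods in the post-mortem above: sp-pod hit Docker Hub's unauthenticated pull rate limit on docker.io/nginx, while both echo-server pods failed because the runtime's short-name mode is enforcing and the unqualified name kicbase/echo-server resolves ambiguously. Assuming the ReplicaSet hello-node-75c85bcc94 is owned by a Deployment named hello-node, a sketch of the usual workarounds:

	# Fully qualify the image so no short-name resolution is needed:
	kubectl --context functional-656017 set image deployment/hello-node echo-server=docker.io/kicbase/echo-server:latest
	# Or side-step the registry by pre-loading the image into the node:
	out/minikube-linux-amd64 -p functional-656017 image load docker.io/kicbase/echo-server:latest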

                                                
                                    
x
+
TestFunctional/parallel/MySQL (602.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-656017 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-7nm86" [18816fa5-7b10-470a-ae7a-0f0514bb3485] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:337: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-656017 -n functional-656017
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-10-26 14:44:43.304615331 +0000 UTC m=+1836.860012718
functional_test.go:1804: (dbg) Run:  kubectl --context functional-656017 describe po mysql-5bb876957f-7nm86 -n default
functional_test.go:1804: (dbg) kubectl --context functional-656017 describe po mysql-5bb876957f-7nm86 -n default:
Name:             mysql-5bb876957f-7nm86
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-656017/192.168.49.2
Start Time:       Sun, 26 Oct 2025 14:34:42 +0000
Labels:           app=mysql
                  pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP (mysql)
    Host Port:      0/TCP (mysql)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gq5ll (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-gq5ll:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-5bb876957f-7nm86 to functional-656017
  Warning  Failed     5m32s                 kubelet            Failed to pull image "docker.io/mysql:5.7": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     103s (x2 over 5m32s)  kubelet            Error: ErrImagePull
  Warning  Failed     103s                  kubelet            Failed to pull image "docker.io/mysql:5.7": unable to pull image or OCI artifact: pull image err: initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    90s (x2 over 5m31s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
  Warning  Failed     90s (x2 over 5m31s)   kubelet            Error: ImagePullBackOff
  Normal   Pulling    78s (x3 over 10m)     kubelet            Pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-656017 logs mysql-5bb876957f-7nm86 -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-656017 logs mysql-5bb876957f-7nm86 -n default: exit status 1 (73.589763ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-7nm86" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-656017 logs mysql-5bb876957f-7nm86 -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
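The mysql pod never became ready for the same underlying reason: every pull of docker.io/mysql:5.7 was answered with toomanyrequests by Docker Hub. The usual mitigations on a shared CI host are to authenticate the pulls or to load the image out of band before the test needs it; a minimal sketch of the latter:

	# Pre-load the image so the kubelet never pulls it from docker.io:
	out/minikube-linux-amd64 -p functional-656017 image load docker.io/mysql:5.7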
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-656017
helpers_test.go:243: (dbg) docker inspect functional-656017:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51",
	        "Created": "2025-10-26T14:26:15.662564705Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 871791,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T14:26:15.695471555Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51/hostname",
	        "HostsPath": "/var/lib/docker/containers/7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51/hosts",
	        "LogPath": "/var/lib/docker/containers/7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51/7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51-json.log",
	        "Name": "/functional-656017",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-656017:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-656017",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7e6d295c9fb0658013110fcfbf0f4bc24425e109a2fc79f6866a52b634876e51",
	                "LowerDir": "/var/lib/docker/overlay2/18941be66dc438b29a0d8f1b6cfb5e94b3f5364e7ff8ec834dc7e25ed24e4d78-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/18941be66dc438b29a0d8f1b6cfb5e94b3f5364e7ff8ec834dc7e25ed24e4d78/merged",
	                "UpperDir": "/var/lib/docker/overlay2/18941be66dc438b29a0d8f1b6cfb5e94b3f5364e7ff8ec834dc7e25ed24e4d78/diff",
	                "WorkDir": "/var/lib/docker/overlay2/18941be66dc438b29a0d8f1b6cfb5e94b3f5364e7ff8ec834dc7e25ed24e4d78/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-656017",
	                "Source": "/var/lib/docker/volumes/functional-656017/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-656017",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-656017",
	                "name.minikube.sigs.k8s.io": "functional-656017",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0491f8ee0884dffdfb60cf16586bdc089924ef954ce989676a59241184322961",
	            "SandboxKey": "/var/run/docker/netns/0491f8ee0884",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33546"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33547"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33550"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33548"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33549"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-656017": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:57:b7:c3:e1:66",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "530477eeed5a5cd1e2f2740c0bd7a64c9f8fbcffeceb135f9b5907f3c53af82d",
	                    "EndpointID": "0da32925992ecd2bb8901e4bfaa39ba5c2a59c288a0aa7b60b19bd3c9f7d4c8f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-656017",
	                        "7e6d295c9fb0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-656017 -n functional-656017
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-656017 logs -n 25: (1.290137488s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-656017 image ls                                                                                                │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image          │ functional-656017 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image          │ functional-656017 image save --daemon kicbase/echo-server:functional-656017 --alsologtostderr                             │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ ssh            │ functional-656017 ssh sudo cat /etc/ssl/certs/845095.pem                                                                  │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ ssh            │ functional-656017 ssh sudo cat /usr/share/ca-certificates/845095.pem                                                      │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ ssh            │ functional-656017 ssh sudo cat /etc/ssl/certs/51391683.0                                                                  │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ ssh            │ functional-656017 ssh sudo cat /etc/ssl/certs/8450952.pem                                                                 │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ ssh            │ functional-656017 ssh sudo cat /usr/share/ca-certificates/8450952.pem                                                     │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ ssh            │ functional-656017 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                  │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ ssh            │ functional-656017 ssh sudo cat /etc/test/nested/copy/845095/hosts                                                         │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image          │ functional-656017 image ls --format short --alsologtostderr                                                               │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image          │ functional-656017 image ls --format yaml --alsologtostderr                                                                │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ ssh            │ functional-656017 ssh pgrep buildkitd                                                                                     │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │                     │
	│ image          │ functional-656017 image build -t localhost/my-image:functional-656017 testdata/build --alsologtostderr                    │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image          │ functional-656017 image ls                                                                                                │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image          │ functional-656017 image ls --format json --alsologtostderr                                                                │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ image          │ functional-656017 image ls --format table --alsologtostderr                                                               │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ update-context │ functional-656017 update-context --alsologtostderr -v=2                                                                   │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ update-context │ functional-656017 update-context --alsologtostderr -v=2                                                                   │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ update-context │ functional-656017 update-context --alsologtostderr -v=2                                                                   │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:34 UTC │ 26 Oct 25 14:34 UTC │
	│ service        │ functional-656017 service list                                                                                            │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:38 UTC │ 26 Oct 25 14:38 UTC │
	│ service        │ functional-656017 service list -o json                                                                                    │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:38 UTC │ 26 Oct 25 14:38 UTC │
	│ service        │ functional-656017 service --namespace=default --https --url hello-node                                                    │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:38 UTC │                     │
	│ service        │ functional-656017 service hello-node --url --format={{.IP}}                                                               │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:38 UTC │                     │
	│ service        │ functional-656017 service hello-node --url                                                                                │ functional-656017 │ jenkins │ v1.37.0 │ 26 Oct 25 14:38 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 14:29:41
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 14:29:41.706256  884792 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:29:41.706528  884792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:29:41.706536  884792 out.go:374] Setting ErrFile to fd 2...
	I1026 14:29:41.706540  884792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:29:41.706726  884792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:29:41.707221  884792 out.go:368] Setting JSON to false
	I1026 14:29:41.708137  884792 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7930,"bootTime":1761481052,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 14:29:41.708256  884792 start.go:141] virtualization: kvm guest
	I1026 14:29:41.710295  884792 out.go:179] * [functional-656017] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 14:29:41.711616  884792 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 14:29:41.711623  884792 notify.go:220] Checking for updates...
	I1026 14:29:41.713376  884792 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:29:41.714796  884792 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 14:29:41.716100  884792 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 14:29:41.717345  884792 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 14:29:41.718672  884792 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 14:29:41.720405  884792 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:29:41.720928  884792 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:29:41.745671  884792 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 14:29:41.745765  884792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:29:41.803208  884792 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-26 14:29:41.791510406 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 14:29:41.803325  884792 docker.go:318] overlay module found
	I1026 14:29:41.805202  884792 out.go:179] * Using the docker driver based on existing profile
	I1026 14:29:41.806275  884792 start.go:305] selected driver: docker
	I1026 14:29:41.806287  884792 start.go:925] validating driver "docker" against &{Name:functional-656017 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-656017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:29:41.806380  884792 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 14:29:41.806469  884792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:29:41.862329  884792 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-26 14:29:41.852410907 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 14:29:41.863025  884792 cni.go:84] Creating CNI manager for ""
	I1026 14:29:41.863097  884792 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 14:29:41.863156  884792 start.go:349] cluster config:
	{Name:functional-656017 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-656017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:29:41.864878  884792 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 26 14:43:19 functional-656017 crio[3608]: time="2025-10-26T14:43:19.309899125Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=d5034030-ce4d-4c42-bdfd-9964ed7f472b name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:43:19 functional-656017 crio[3608]: time="2025-10-26T14:43:19.310093043Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=d5034030-ce4d-4c42-bdfd-9964ed7f472b name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:43:19 functional-656017 crio[3608]: time="2025-10-26T14:43:19.310130439Z" level=info msg="Neither image nor artfiact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=d5034030-ce4d-4c42-bdfd-9964ed7f472b name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:43:25 functional-656017 crio[3608]: time="2025-10-26T14:43:25.309766716Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=e749e704-4973-4f79-acac-abf5a097002d name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:43:25 functional-656017 crio[3608]: time="2025-10-26T14:43:25.30993333Z" level=info msg="Image docker.io/mysql:5.7 not found" id=e749e704-4973-4f79-acac-abf5a097002d name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:43:25 functional-656017 crio[3608]: time="2025-10-26T14:43:25.309970512Z" level=info msg="Neither image nor artfiact docker.io/mysql:5.7 found" id=e749e704-4973-4f79-acac-abf5a097002d name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:43:31 functional-656017 crio[3608]: time="2025-10-26T14:43:31.30062908Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 26 14:43:31 functional-656017 crio[3608]: time="2025-10-26T14:43:31.309954584Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=084a0c50-9034-4f25-b746-c4d4142fc379 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:43:31 functional-656017 crio[3608]: time="2025-10-26T14:43:31.310118787Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=084a0c50-9034-4f25-b746-c4d4142fc379 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:43:31 functional-656017 crio[3608]: time="2025-10-26T14:43:31.310154981Z" level=info msg="Neither image nor artfiact docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c found" id=084a0c50-9034-4f25-b746-c4d4142fc379 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:43:45 functional-656017 crio[3608]: time="2025-10-26T14:43:45.895470952Z" level=info msg="Pulling image: docker.io/nginx:latest" id=55f046a6-126f-4e2e-8092-de2bf187a0b8 name=/runtime.v1.ImageService/PullImage
	Oct 26 14:43:45 functional-656017 crio[3608]: time="2025-10-26T14:43:45.89695012Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 26 14:43:57 functional-656017 crio[3608]: time="2025-10-26T14:43:57.309429423Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=f92467d9-c799-4fe5-b92b-e1a1209a2ca9 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:43:57 functional-656017 crio[3608]: time="2025-10-26T14:43:57.3096522Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=f92467d9-c799-4fe5-b92b-e1a1209a2ca9 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:43:57 functional-656017 crio[3608]: time="2025-10-26T14:43:57.309707195Z" level=info msg="Neither image nor artfiact docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 found" id=f92467d9-c799-4fe5-b92b-e1a1209a2ca9 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:44:10 functional-656017 crio[3608]: time="2025-10-26T14:44:10.309808362Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=9aac3a4b-3680-48b4-b76a-8ac9af86400a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:44:10 functional-656017 crio[3608]: time="2025-10-26T14:44:10.310035468Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=9aac3a4b-3680-48b4-b76a-8ac9af86400a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:44:10 functional-656017 crio[3608]: time="2025-10-26T14:44:10.310104185Z" level=info msg="Neither image nor artfiact docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 found" id=9aac3a4b-3680-48b4-b76a-8ac9af86400a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:44:16 functional-656017 crio[3608]: time="2025-10-26T14:44:16.546516534Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 26 14:44:22 functional-656017 crio[3608]: time="2025-10-26T14:44:22.310549421Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=00a47595-5c2a-4b2e-88c9-c2ebc905bee8 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:44:22 functional-656017 crio[3608]: time="2025-10-26T14:44:22.310774319Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=00a47595-5c2a-4b2e-88c9-c2ebc905bee8 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:44:22 functional-656017 crio[3608]: time="2025-10-26T14:44:22.310845597Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 found" id=00a47595-5c2a-4b2e-88c9-c2ebc905bee8 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:44:36 functional-656017 crio[3608]: time="2025-10-26T14:44:36.31018479Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=5dc44e2a-750d-4642-bf19-baa448c70446 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:44:36 functional-656017 crio[3608]: time="2025-10-26T14:44:36.310428405Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=5dc44e2a-750d-4642-bf19-baa448c70446 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 14:44:36 functional-656017 crio[3608]: time="2025-10-26T14:44:36.310482448Z" level=info msg="Neither image nor artifact docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 found" id=5dc44e2a-750d-4642-bf19-baa448c70446 name=/runtime.v1.ImageService/ImageStatus
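	
	The repeated "Checking image status ... not found" pairs above are the kubelet polling CRI-O's /runtime.v1.ImageService/ImageStatus endpoint before each pull attempt. A minimal sketch of the same RPC in Go, assuming CRI-O's default socket path (/var/run/crio/crio.sock) and the published k8s.io/cri-api bindings; illustrative only, not part of the test suite:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// CRI-O's default unix socket; the kubelet dials the same endpoint.
		conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		resp, err := runtimeapi.NewImageServiceClient(conn).ImageStatus(ctx,
			&runtimeapi.ImageStatusRequest{
				Image: &runtimeapi.ImageSpec{Image: "docker.io/kubernetesui/dashboard:v2.7.0"},
			})
		if err != nil {
			panic(err)
		}
		// A nil Image in the response is what CRI-O logs as "not found";
		// the kubelet then follows up with a PullImage RPC, as in the log above.
		fmt.Printf("image present: %v\n", resp.GetImage() != nil)
	}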
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d72feea374140       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   15 minutes ago      Exited              mount-munger              0                   321356887ae61       busybox-mount                               default
	ae1aa39570023       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e       16 minutes ago      Running             nginx                     0                   6f2dd0f292d4d       nginx-svc                                   default
	2db865dc7b069       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      16 minutes ago      Running             storage-provisioner       2                   e1cf8e1f14fd9       storage-provisioner                         kube-system
	ca2758c3b0747       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      16 minutes ago      Running             kube-apiserver            0                   b8dc315566e53       kube-apiserver-functional-656017            kube-system
	9b1b7e8dd2367       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      16 minutes ago      Running             kube-controller-manager   1                   695d7ad4f4a81       kube-controller-manager-functional-656017   kube-system
	ff718389fd0d5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      16 minutes ago      Running             kube-scheduler            1                   ffea8b2f09a06       kube-scheduler-functional-656017            kube-system
	c376e39f3b52c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      16 minutes ago      Running             etcd                      1                   65ab1cbe95671       etcd-functional-656017                      kube-system
	dae3a7eefaef6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago      Exited              storage-provisioner       1                   e1cf8e1f14fd9       storage-provisioner                         kube-system
	f10fe1a825ece       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      17 minutes ago      Running             coredns                   1                   4b282a34c4d98       coredns-66bc5c9577-fvls7                    kube-system
	ac2a4f4184e61       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      17 minutes ago      Running             kube-proxy                1                   1366df45696e6       kube-proxy-lzmlr                            kube-system
	3a00b00b881c9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      17 minutes ago      Running             kindnet-cni               1                   0a3c75fa75530       kindnet-v9qhm                               kube-system
	21a7b04b3aa18       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      17 minutes ago      Exited              coredns                   0                   4b282a34c4d98       coredns-66bc5c9577-fvls7                    kube-system
	925d395db73b9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      18 minutes ago      Exited              kindnet-cni               0                   0a3c75fa75530       kindnet-v9qhm                               kube-system
	10f77c7ed3607       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      18 minutes ago      Exited              kube-proxy                0                   1366df45696e6       kube-proxy-lzmlr                            kube-system
	845e9b22f7c9c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      18 minutes ago      Exited              kube-controller-manager   0                   695d7ad4f4a81       kube-controller-manager-functional-656017   kube-system
	d2f21e90bdfb2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      18 minutes ago      Exited              kube-scheduler            0                   ffea8b2f09a06       kube-scheduler-functional-656017            kube-system
	de4a79c72b10d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      18 minutes ago      Exited              etcd                      0                   65ab1cbe95671       etcd-functional-656017                      kube-system
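	
	Note the Exited/Running pairs in this table: each attempt-0 control-plane container (etcd, kube-scheduler, kube-controller-manager, coredns, kube-proxy, kindnet-cni, storage-provisioner) exited around the 14:27 shutdown recorded in the etcd and kube-scheduler logs below, and its attempt-1 replacement has been Running since, inside the same pod sandbox (identical POD ID column).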
	
	
	==> coredns [21a7b04b3aa18668b05a7e74c0c854e868c19d467f7d6cb885dc923426d1175d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40376 - 33371 "HINFO IN 7412649040229934986.2398417335010100422. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.479800978s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f10fe1a825ece4e7e5704e2cc7128f0a02fac89b9c51059a6cdd22793ea14365] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52631 - 8649 "HINFO IN 7064989214614263487.4096664645323782067. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.119610551s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
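	
	The "failed to list"/"Unhandled Error" burst above is CoreDNS's kubernetes plugin re-running its reflectors' initial List calls against the service VIP (10.96.0.1:443) while the apiserver restarts. A minimal client-go sketch of the same call shape (paged List, limit 500), assuming in-cluster credentials; while the apiserver is down it fails with the same "connection refused" or TLS handshake timeout seen here:
	
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same shape as the reflector's initial call in the log above.
		_, err = clientset.CoreV1().Namespaces().List(context.Background(),
			metav1.ListOptions{Limit: 500})
		if err != nil {
			fmt.Println("list failed:", err)
		}
	}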
	
	
	==> describe nodes <==
	Name:               functional-656017
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-656017
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=functional-656017
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T14_26_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 14:26:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-656017
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 14:44:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 14:44:42 +0000   Sun, 26 Oct 2025 14:26:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 14:44:42 +0000   Sun, 26 Oct 2025 14:26:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 14:44:42 +0000   Sun, 26 Oct 2025 14:26:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 14:44:42 +0000   Sun, 26 Oct 2025 14:26:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-656017
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                48b09da4-51f5-4aad-ba21-72df28aa14f3
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-cnh5r                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  default                     hello-node-connect-7d85dfc575-5l852           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  default                     mysql-5bb876957f-7nm86                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-66bc5c9577-fvls7                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     18m
	  kube-system                 etcd-functional-656017                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         18m
	  kube-system                 kindnet-v9qhm                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      18m
	  kube-system                 kube-apiserver-functional-656017              250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-functional-656017     200m (2%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-lzmlr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-functional-656017              100m (1%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-wbqc8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-94hj8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m                kubelet          Node functional-656017 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m                kubelet          Node functional-656017 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m                kubelet          Node functional-656017 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node functional-656017 event: Registered Node functional-656017 in Controller
	  Normal  NodeReady                17m                kubelet          Node functional-656017 status is now: NodeReady
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node functional-656017 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node functional-656017 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x8 over 17m)  kubelet          Node functional-656017 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node functional-656017 event: Registered Node functional-656017 in Controller
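	
	As a sanity check, the Allocated resources totals above match the per-pod rows: CPU requests 600m + 100m + 100m + 100m + 250m + 200m + 100m = 1450m, i.e. 1450/8000 ≈ 18% of the 8-CPU node, and CPU limits 700m + 100m = 800m (10%); memory requests 512Mi + 70Mi + 100Mi + 50Mi = 732Mi and memory limits 700Mi + 170Mi + 50Mi = 920Mi, each about 2% of the 32863352Ki allocatable.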
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
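	
	The "martian source" entries are the kernel's log_martians output: packets whose source address is not considered routable on the receiving interface. In Docker-based minikube/kind environments they typically appear transiently while pod ARP and bridge routes converge, and are noise here rather than evidence of the failures in this report.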
	
	
	==> etcd [c376e39f3b52c0c27a12c20a752c2867f4459d455be0af813e18bb55ac82d433] <==
	{"level":"warn","ts":"2025-10-26T14:27:53.058547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.065475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.072805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.080415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.093382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.107398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.114758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.121179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.128634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.135424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.143287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.150345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.156984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.164095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.183548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.187308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.193611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.200153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:27:53.250747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36940","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T14:37:52.741358Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":998}
	{"level":"info","ts":"2025-10-26T14:37:52.750548Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":998,"took":"8.174187ms","hash":1610504942,"current-db-size-bytes":3313664,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":3313664,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2025-10-26T14:37:52.750610Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1610504942,"revision":998,"compact-revision":-1}
	{"level":"info","ts":"2025-10-26T14:42:52.746931Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1343}
	{"level":"info","ts":"2025-10-26T14:42:52.749977Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1343,"took":"2.707662ms","hash":2799218699,"current-db-size-bytes":3313664,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":2220032,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-10-26T14:42:52.750011Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2799218699,"revision":1343,"compact-revision":998}
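	
	The paired "compact tree index"/"finished scheduled compaction" entries above (revision 998 at 14:37:52, then revision 1343 at 14:42:52) are the kube-apiserver's periodic five-minute compactions of etcd history. A sketch of the same Compact RPC via go.etcd.io/etcd/client/v3; the endpoint matches this cluster's advertise-client-urls, and the TLS setup is elided (this etcd requires client certificates), so treat it as illustrative:
	
	package main
	
	import (
		"context"
		"time"
	
		clientv3 "go.etcd.io/etcd/client/v3"
	)
	
	func main() {
		cli, err := clientv3.New(clientv3.Config{
			Endpoints:   []string{"https://192.168.49.2:2379"},
			DialTimeout: 5 * time.Second,
			// TLS config omitted for brevity; required against this etcd.
		})
		if err != nil {
			panic(err)
		}
		defer cli.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// Read any key to learn the current revision, then compact
		// history older than it, mirroring the log entries above.
		resp, err := cli.Get(ctx, "compaction-probe")
		if err != nil {
			panic(err)
		}
		if _, err := cli.Compact(ctx, resp.Header.Revision); err != nil {
			panic(err)
		}
	}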
	
	
	==> etcd [de4a79c72b10d2e604ace86f200c4beb93e1fa32406f916e7886b8232029ecce] <==
	{"level":"warn","ts":"2025-10-26T14:26:27.278460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.284638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.291821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.307848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.314269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.320424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T14:26:27.363779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35742","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T14:27:30.562035Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-26T14:27:30.562147Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-656017","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-26T14:27:30.562265Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-26T14:27:30.563832Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-26T14:27:30.563898Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T14:27:30.563920Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-26T14:27:30.563978Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-26T14:27:30.564001Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-26T14:27:30.564001Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-26T14:27:30.564033Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-26T14:27:30.564087Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-26T14:27:30.564101Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-26T14:27:30.564070Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-26T14:27:30.564126Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T14:27:30.566120Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-26T14:27:30.566210Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T14:27:30.566245Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-26T14:27:30.566255Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-656017","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 14:44:44 up  2:27,  0 user,  load average: 0.36, 0.18, 0.40
	Linux functional-656017 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3a00b00b881c9d02ec005d4a24113c1b2ef56dd5b59b8f853a70bead7cfbbe7b] <==
	I1026 14:42:40.515330       1 main.go:301] handling current node
	I1026 14:42:50.518475       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:42:50.518514       1 main.go:301] handling current node
	I1026 14:43:00.512617       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:43:00.512676       1 main.go:301] handling current node
	I1026 14:43:10.512089       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:43:10.512123       1 main.go:301] handling current node
	I1026 14:43:20.511717       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:43:20.511751       1 main.go:301] handling current node
	I1026 14:43:30.511868       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:43:30.511914       1 main.go:301] handling current node
	I1026 14:43:40.511840       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:43:40.511882       1 main.go:301] handling current node
	I1026 14:43:50.511843       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:43:50.511877       1 main.go:301] handling current node
	I1026 14:44:00.513381       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:44:00.513433       1 main.go:301] handling current node
	I1026 14:44:10.511921       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:44:10.511965       1 main.go:301] handling current node
	I1026 14:44:20.518110       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:44:20.518153       1 main.go:301] handling current node
	I1026 14:44:30.512642       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:44:30.512674       1 main.go:301] handling current node
	I1026 14:44:40.513095       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:44:40.513138       1 main.go:301] handling current node
	
	
	==> kindnet [925d395db73b90f6f8405290b4a9c93369786816f1fce17a08dc90ee359443d4] <==
	I1026 14:26:36.186777       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 14:26:36.187051       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1026 14:26:36.215845       1 main.go:148] setting mtu 1500 for CNI 
	I1026 14:26:36.215876       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 14:26:36.215900       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T14:26:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 14:26:36.417338       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 14:26:36.417362       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 14:26:36.417377       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 14:26:36.417977       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 14:26:36.817566       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 14:26:36.817595       1 metrics.go:72] Registering metrics
	I1026 14:26:36.817681       1 controller.go:711] "Syncing nftables rules"
	I1026 14:26:46.417194       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:26:46.417266       1 main.go:301] handling current node
	I1026 14:26:56.417258       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:26:56.417297       1 main.go:301] handling current node
	I1026 14:27:06.417463       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:27:06.417500       1 main.go:301] handling current node
	I1026 14:27:16.417114       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 14:27:16.417154       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ca2758c3b0747b23b01630bbf07b4e70b74246e999a371f52426068264bb6eaa] <==
	I1026 14:27:53.701772       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 14:27:53.702012       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 14:27:53.702062       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 14:27:53.706550       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 14:27:53.728989       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 14:27:53.735942       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 14:27:54.604728       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1026 14:27:54.811076       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1026 14:27:54.812437       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 14:27:54.817571       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 14:27:55.181493       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 14:27:55.270909       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 14:27:55.279925       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 14:27:55.342794       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 14:27:55.350188       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 14:27:57.331287       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 14:28:20.974427       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.174.82"}
	I1026 14:28:25.530781       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.150.87"}
	I1026 14:28:25.819878       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.185.91"}
	I1026 14:28:26.786816       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.80.146"}
	I1026 14:29:42.711630       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 14:29:42.815661       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.251.128"}
	I1026 14:29:42.828033       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.232.5"}
	I1026 14:34:42.920951       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.97.97.210"}
	I1026 14:37:53.625269       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [845e9b22f7c9cbbfd8966f1c14a659b863ea4d252c9783d731959d29e933f667] <==
	I1026 14:26:34.754690       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 14:26:34.754694       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 14:26:34.754715       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 14:26:34.754731       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 14:26:34.754876       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 14:26:34.754950       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 14:26:34.755053       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 14:26:34.755068       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 14:26:34.755106       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 14:26:34.755491       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 14:26:34.755512       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 14:26:34.755581       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 14:26:34.755590       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 14:26:34.758063       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 14:26:34.758145       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 14:26:34.758211       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 14:26:34.758222       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 14:26:34.758230       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 14:26:34.760332       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 14:26:34.763565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:26:34.769710       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-656017" podCIDRs=["10.244.0.0/24"]
	I1026 14:26:34.777273       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 14:26:34.780281       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 14:26:34.783551       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 14:26:49.706645       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [9b1b7e8dd23671d7230b504e22a6194c4a3dded87c9396dc7558bfcd19bfd0cd] <==
	I1026 14:27:57.021496       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 14:27:57.023787       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 14:27:57.026170       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 14:27:57.026242       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 14:27:57.026243       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 14:27:57.026443       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 14:27:57.026555       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 14:27:57.026577       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 14:27:57.027368       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 14:27:57.027394       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1026 14:27:57.027445       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 14:27:57.027452       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 14:27:57.027571       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 14:27:57.027828       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 14:27:57.028511       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 14:27:57.028539       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 14:27:57.028779       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 14:27:57.030794       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 14:27:57.042225       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1026 14:29:42.761746       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:29:42.767312       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:29:42.768859       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:29:42.770695       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:29:42.772025       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 14:29:42.777667       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [10f77c7ed3607a9f2e9e1b7386954e24cfa2bd9656b621d9976d0cb1df09d688] <==
	I1026 14:26:36.002635       1 server_linux.go:53] "Using iptables proxy"
	I1026 14:26:36.075108       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 14:26:36.175750       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 14:26:36.175810       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 14:26:36.175960       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 14:26:36.195046       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 14:26:36.195099       1 server_linux.go:132] "Using iptables Proxier"
	I1026 14:26:36.200612       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 14:26:36.201019       1 server.go:527] "Version info" version="v1.34.1"
	I1026 14:26:36.201037       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 14:26:36.202299       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 14:26:36.202325       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 14:26:36.202346       1 config.go:200] "Starting service config controller"
	I1026 14:26:36.202368       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 14:26:36.202676       1 config.go:309] "Starting node config controller"
	I1026 14:26:36.202774       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 14:26:36.202783       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 14:26:36.202358       1 config.go:106] "Starting endpoint slice config controller"
	I1026 14:26:36.203225       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 14:26:36.302515       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 14:26:36.303744       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 14:26:36.303782       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [ac2a4f4184e61b6f3e212173f5fd287a266fe9854e7e6d92cc6cc308487c717e] <==
	E1026 14:27:20.271682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-656017&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:27:21.330105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-656017&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:27:23.056459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-656017&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:27:28.667874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-656017&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:27:50.258783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-656017&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1026 14:28:11.271153       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 14:28:11.271219       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 14:28:11.271337       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 14:28:11.291259       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 14:28:11.291313       1 server_linux.go:132] "Using iptables Proxier"
	I1026 14:28:11.297113       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 14:28:11.297507       1 server.go:527] "Version info" version="v1.34.1"
	I1026 14:28:11.297534       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 14:28:11.298650       1 config.go:200] "Starting service config controller"
	I1026 14:28:11.298673       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 14:28:11.298713       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 14:28:11.298733       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 14:28:11.298740       1 config.go:106] "Starting endpoint slice config controller"
	I1026 14:28:11.298771       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 14:28:11.298847       1 config.go:309] "Starting node config controller"
	I1026 14:28:11.298855       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 14:28:11.298862       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 14:28:11.399320       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 14:28:11.399377       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 14:28:11.399421       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
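	
	The "Waiting for caches to sync" / "Caches are synced" pairs here (and throughout the kube-controller-manager logs above) are client-go's shared-informer startup handshake: event handlers are held off until each informer's initial List has populated its cache. A minimal sketch of that pattern, assuming in-cluster credentials:
	
	package main
	
	import (
		"fmt"
		"time"
	
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/cache"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		clientset := kubernetes.NewForConfigOrDie(cfg)
	
		factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
		svcInformer := factory.Core().V1().Services().Informer()
	
		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)
	
		// The step the logs narrate: block until the initial List for
		// each started informer has completed.
		if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
			panic("caches did not sync")
		}
		fmt.Println("caches are synced")
	}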
	
	
	==> kube-scheduler [d2f21e90bdfb20b52baadfea12c2c6a9a2d85eb2e69ebba8e079dbf1272e4e5a] <==
	E1026 14:26:27.781998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 14:26:27.781790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 14:26:27.782054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 14:26:27.782048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 14:26:27.782186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 14:26:27.782196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 14:26:27.782259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 14:26:27.782261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 14:26:27.782256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 14:26:27.782354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 14:26:28.762282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 14:26:28.766433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 14:26:28.770439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 14:26:28.779033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:26:28.783221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 14:26:28.788274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 14:26:28.922828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 14:26:28.963889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1026 14:26:29.376995       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 14:27:30.452222       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 14:27:30.452235       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1026 14:27:30.452275       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1026 14:27:30.452300       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1026 14:27:30.452329       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1026 14:27:30.452358       1 run.go:72] "command failed" err="finished without leader elect"
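
Note: the burst of "Failed to watch ... is forbidden" errors above appears to be the scheduler's informers racing the API server's RBAC bootstrap during the restart; once the built-in system:kube-scheduler bindings are reconciled the caches sync (14:26:29), and "finished without leader elect" is the old instance shutting down as part of the same restart rather than a standalone fault. If a cluster showed these errors persistently, the binding could be probed directly (a diagnostic sketch, not part of this test run):

	kubectl get clusterrolebinding system:kube-scheduler -o wide
	kubectl auth can-i list csinodes.storage.k8s.io --as=system:kube-scheduler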
	
	
	==> kube-scheduler [ff718389fd0d587773fca5861c2ecbbf06d2e55df34a9daf84bc2a88de39e750] <==
	I1026 14:27:52.332052       1 serving.go:386] Generated self-signed cert in-memory
	W1026 14:27:53.620664       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 14:27:53.620791       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 14:27:53.620809       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 14:27:53.620819       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 14:27:53.647470       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 14:27:53.647583       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 14:27:53.649836       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 14:27:53.649877       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 14:27:53.650229       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 14:27:53.650262       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 14:27:53.751063       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
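
The three authentication warnings at 14:27:53 are likewise a restart artifact: the scheduler could not read the extension-apiserver-authentication ConfigMap and fell back to treating requests as anonymous, and the log itself names the fix. Filling in the placeholders from that hint (the binding name here is hypothetical, and kube-scheduler authenticates as a user rather than a service account, hence --user), it might look like:

	kubectl create rolebinding scheduler-authn-reader -n kube-system \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler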
	
	
	==> kubelet <==
	Oct 26 14:42:40 functional-656017 kubelet[4154]: E1026 14:42:40.309577    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5l852" podUID="712a7bde-503f-4d52-bb5e-f79f7ce120a7"
	Oct 26 14:42:45 functional-656017 kubelet[4154]: E1026 14:42:45.309631    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-cnh5r" podUID="c9b890f0-43a9-4379-af0d-c767a40fb9a2"
	Oct 26 14:42:50 functional-656017 kubelet[4154]: E1026 14:42:50.310401    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wbqc8" podUID="3cfdea14-a298-4176-999a-892bdf252dfc"
	Oct 26 14:42:53 functional-656017 kubelet[4154]: E1026 14:42:53.309386    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5l852" podUID="712a7bde-503f-4d52-bb5e-f79f7ce120a7"
	Oct 26 14:42:59 functional-656017 kubelet[4154]: E1026 14:42:59.309666    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-cnh5r" podUID="c9b890f0-43a9-4379-af0d-c767a40fb9a2"
	Oct 26 14:43:00 functional-656017 kubelet[4154]: E1026 14:43:00.662328    4154 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Oct 26 14:43:00 functional-656017 kubelet[4154]: E1026 14:43:00.662402    4154 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Oct 26 14:43:00 functional-656017 kubelet[4154]: E1026 14:43:00.662645    4154 kuberuntime_manager.go:1449] "Unhandled Error" err="container mysql start failed in pod mysql-5bb876957f-7nm86_default(18816fa5-7b10-470a-ae7a-0f0514bb3485): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 26 14:43:00 functional-656017 kubelet[4154]: E1026 14:43:00.662723    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7nm86" podUID="18816fa5-7b10-470a-ae7a-0f0514bb3485"
	Oct 26 14:43:05 functional-656017 kubelet[4154]: E1026 14:43:05.311042    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wbqc8" podUID="3cfdea14-a298-4176-999a-892bdf252dfc"
	Oct 26 14:43:06 functional-656017 kubelet[4154]: E1026 14:43:06.309305    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5l852" podUID="712a7bde-503f-4d52-bb5e-f79f7ce120a7"
	Oct 26 14:43:12 functional-656017 kubelet[4154]: E1026 14:43:12.309763    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-cnh5r" podUID="c9b890f0-43a9-4379-af0d-c767a40fb9a2"
	Oct 26 14:43:13 functional-656017 kubelet[4154]: E1026 14:43:13.309774    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7nm86" podUID="18816fa5-7b10-470a-ae7a-0f0514bb3485"
	Oct 26 14:43:17 functional-656017 kubelet[4154]: E1026 14:43:17.310094    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5l852" podUID="712a7bde-503f-4d52-bb5e-f79f7ce120a7"
	Oct 26 14:43:19 functional-656017 kubelet[4154]: E1026 14:43:19.310486    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wbqc8" podUID="3cfdea14-a298-4176-999a-892bdf252dfc"
	Oct 26 14:43:27 functional-656017 kubelet[4154]: E1026 14:43:27.309771    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-cnh5r" podUID="c9b890f0-43a9-4379-af0d-c767a40fb9a2"
	Oct 26 14:43:32 functional-656017 kubelet[4154]: E1026 14:43:32.309706    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-5l852" podUID="712a7bde-503f-4d52-bb5e-f79f7ce120a7"
	Oct 26 14:43:45 functional-656017 kubelet[4154]: E1026 14:43:45.895024    4154 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 26 14:43:45 functional-656017 kubelet[4154]: E1026 14:43:45.895096    4154 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 26 14:43:45 functional-656017 kubelet[4154]: E1026 14:43:45.895330    4154 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-94hj8_kubernetes-dashboard(daeb2d33-bcbf-4fc6-b399-1d9cb26423cd): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image" logger="UnhandledError"
	Oct 26 14:43:45 functional-656017 kubelet[4154]: E1026 14:43:45.895401    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-94hj8" podUID="daeb2d33-bcbf-4fc6-b399-1d9cb26423cd"
	Oct 26 14:43:57 functional-656017 kubelet[4154]: E1026 14:43:57.310130    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-94hj8" podUID="daeb2d33-bcbf-4fc6-b399-1d9cb26423cd"
	Oct 26 14:44:10 functional-656017 kubelet[4154]: E1026 14:44:10.310459    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-94hj8" podUID="daeb2d33-bcbf-4fc6-b399-1d9cb26423cd"
	Oct 26 14:44:22 functional-656017 kubelet[4154]: E1026 14:44:22.311214    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-94hj8" podUID="daeb2d33-bcbf-4fc6-b399-1d9cb26423cd"
	Oct 26 14:44:36 functional-656017 kubelet[4154]: E1026 14:44:36.310852    4154 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: provided artifact is a container image\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-94hj8" podUID="daeb2d33-bcbf-4fc6-b399-1d9cb26423cd"
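
Two distinct pull failures repeat through this kubelet log: CRI-O's enforcing short-name mode rejects the unqualified "kicbase/echo-server" as ambiguous, and docker.io returns toomanyrequests for the mysql, metrics-scraper, and dashboard images. Outside of CI, either fully qualifying the image reference or relaxing short-name resolution on the node would address the first failure; a sketch of the latter, assuming CRI-O reads the standard containers-registries drop-in directory (the file name is made up):

	minikube -p functional-656017 ssh -- sudo sh -c \
	  'printf "short-name-mode = \"permissive\"\n" > /etc/containers/registries.conf.d/99-short-name.conf && systemctl restart crio'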
	
	
	==> storage-provisioner [2db865dc7b06999bd7ed228e936a8c3317814c951c195fea0ce4636cd813806f] <==
	W1026 14:44:20.848278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:22.851729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:22.857052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:24.860188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:24.864151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:26.867590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:26.875371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:28.879072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:28.883273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:30.887032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:30.891010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:32.894364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:32.898339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:34.901943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:34.905770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:36.909214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:36.913308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:38.916515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:38.921413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:40.925327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:40.929331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:42.932738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:42.936897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:44.939995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:44:44.944556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
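
These warnings appear to come from the storage provisioner's Endpoints-based leader election renewing against an object type deprecated in v1.33+; they are noisy but harmless here. The replacement API the warning points at can be inspected directly:

	kubectl --context functional-656017 get endpointslices -A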
	
	
	==> storage-provisioner [dae3a7eefaef6e45c44cb57f835e13d80ec46cebc496a065528ebef1b3f3dc50] <==
	I1026 14:27:20.179254       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 14:27:20.181189       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
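
This earlier provisioner instance died because the in-cluster API VIP (10.96.0.1:443) was refusing connections mid-restart; kubelet restarted the container, and the healthy instance above is its replacement. Reachability of the VIP from the node can be spot-checked (a sketch, assuming curl is present in the node image):

	minikube -p functional-656017 ssh -- curl -sk https://10.96.0.1:443/version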
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-656017 -n functional-656017
helpers_test.go:269: (dbg) Run:  kubectl --context functional-656017 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-cnh5r hello-node-connect-7d85dfc575-5l852 mysql-5bb876957f-7nm86 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wbqc8 kubernetes-dashboard-855c9754f9-94hj8
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-656017 describe pod busybox-mount hello-node-75c85bcc94-cnh5r hello-node-connect-7d85dfc575-5l852 mysql-5bb876957f-7nm86 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wbqc8 kubernetes-dashboard-855c9754f9-94hj8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-656017 describe pod busybox-mount hello-node-75c85bcc94-cnh5r hello-node-connect-7d85dfc575-5l852 mysql-5bb876957f-7nm86 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wbqc8 kubernetes-dashboard-855c9754f9-94hj8: exit status 1 (100.625578ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-656017/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:28:36 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://d72feea3741400a60884b44aace0d50fa00c8a56e531a1e8aeeb4607f039e166
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 26 Oct 2025 14:29:34 +0000
	      Finished:     Sun, 26 Oct 2025 14:29:34 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xp9j7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-xp9j7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  16m   default-scheduler  Successfully assigned default/busybox-mount to functional-656017
	  Normal  Pulling    16m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     15m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 730ms (57.013s including waiting). Image size: 4631262 bytes.
	  Normal  Created    15m   kubelet            Created container: mount-munger
	  Normal  Started    15m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-cnh5r
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-656017/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:28:26 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ks69p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ks69p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  16m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-cnh5r to functional-656017
	  Warning  Failed     3m47s (x5 over 16m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     3m47s (x5 over 16m)   kubelet            Error: ErrImagePull
	  Warning  Failed     2m30s (x17 over 16m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    78s (x22 over 16m)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Normal   Pulling    66s (x6 over 16m)     kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-5l852
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-656017/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:28:25 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mjv8l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-mjv8l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  16m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-5l852 to functional-656017
	  Warning  Failed     3m47s (x5 over 16m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     3m47s (x5 over 16m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    73s (x22 over 16m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     73s (x22 over 16m)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    62s (x6 over 16m)    kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-7nm86
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-656017/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:34:42 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gq5ll (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gq5ll:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-5bb876957f-7nm86 to functional-656017
	  Warning  Failed     5m34s                 kubelet            Failed to pull image "docker.io/mysql:5.7": unable to pull image or OCI artifact: pull image err: copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     105s (x2 over 5m34s)  kubelet            Error: ErrImagePull
	  Warning  Failed     105s                  kubelet            Failed to pull image "docker.io/mysql:5.7": unable to pull image or OCI artifact: pull image err: initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    92s (x2 over 5m33s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     92s (x2 over 5m33s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    80s (x3 over 10m)     kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-656017/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 14:28:31 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8slwm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-8slwm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  16m                   default-scheduler  Successfully assigned default/sp-pod to functional-656017
	  Warning  Failed     8m7s (x2 over 11m)    kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m47s (x2 over 15m)   kubelet            Failed to pull image "docker.io/nginx": unable to pull image or OCI artifact: pull image err: initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit; artifact err: get manifest: build image source: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m47s (x4 over 15m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    2m34s (x11 over 15m)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m34s (x11 over 15m)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m20s (x5 over 16m)   kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-wbqc8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-94hj8" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-656017 describe pod busybox-mount hello-node-75c85bcc94-cnh5r hello-node-connect-7d85dfc575-5l852 mysql-5bb876957f-7nm86 sp-pod dashboard-metrics-scraper-77bf4d6c4c-wbqc8 kubernetes-dashboard-855c9754f9-94hj8: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.82s)
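
The MySQL failure above is purely the Docker Hub unauthenticated pull rate limit; the pod spec and scheduling are fine. A common mitigation outside this CI setup (secret name and credential variables below are placeholders) is to pull with an authenticated imagePullSecret:

	kubectl --context functional-656017 create secret docker-registry dockerhub-creds \
	  --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PAT"
	kubectl --context functional-656017 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'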

TestFunctional/parallel/ServiceCmd/DeployApp (600.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-656017 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-656017 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-cnh5r" [c9b890f0-43a9-4379-af0d-c767a40fb9a2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-656017 -n functional-656017
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-26 14:38:27.121507374 +0000 UTC m=+1460.676904753
functional_test.go:1460: (dbg) Run:  kubectl --context functional-656017 describe po hello-node-75c85bcc94-cnh5r -n default
functional_test.go:1460: (dbg) kubectl --context functional-656017 describe po hello-node-75c85bcc94-cnh5r -n default:
Name:             hello-node-75c85bcc94-cnh5r
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-656017/192.168.49.2
Start Time:       Sun, 26 Oct 2025 14:28:26 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ks69p (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-ks69p:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-cnh5r to functional-656017
  Warning  Failed     109s (x4 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     109s (x4 over 10m)    kubelet            Error: ErrImagePull
  Normal   BackOff    31s (x11 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     31s (x11 over 9m59s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    18s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-656017 logs hello-node-75c85bcc94-cnh5r -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-656017 logs hello-node-75c85bcc94-cnh5r -n default: exit status 1 (69.509097ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-cnh5r" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-656017 logs hello-node-75c85bcc94-cnh5r -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.61s)
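
Here the deployment created at functional_test.go:1451 uses the unqualified image name "kicbase/echo-server", which CRI-O's enforcing short-name mode refuses to resolve. Fully qualifying the reference sidesteps the ambiguity (assuming the image is published on Docker Hub under this name):

	kubectl --context functional-656017 create deployment hello-node \
	  --image=docker.io/kicbase/echo-server:latest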

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 image load --daemon kicbase/echo-server:functional-656017 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-656017" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.89s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 image load --daemon kicbase/echo-server:functional-656017 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-656017" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.90s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-656017
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 image load --daemon kicbase/echo-server:functional-656017 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-656017" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.30s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 image save kicbase/echo-server:functional-656017 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1026 14:34:40.192866  887982 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:34:40.193266  887982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:34:40.193344  887982 out.go:374] Setting ErrFile to fd 2...
	I1026 14:34:40.193356  887982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:34:40.193812  887982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:34:40.194855  887982 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:34:40.194977  887982 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:34:40.195425  887982 cli_runner.go:164] Run: docker container inspect functional-656017 --format={{.State.Status}}
	I1026 14:34:40.214249  887982 ssh_runner.go:195] Run: systemctl --version
	I1026 14:34:40.214316  887982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-656017
	I1026 14:34:40.232398  887982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33546 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/functional-656017/id_rsa Username:docker}
	I1026 14:34:40.332002  887982 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1026 14:34:40.332090  887982 cache_images.go:254] Failed to load cached images for "functional-656017": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1026 14:34:40.332118  887982 cache_images.go:266] failed pushing to: functional-656017

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
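
This failure is a cascade from ImageSaveToFile: the tagged image was never present in the cluster runtime, so `image save` wrote nothing and the subsequent `image load` hits "no such file or directory". Checking both ends makes the chain visible:

	out/minikube-linux-amd64 -p functional-656017 image ls
	ls -l /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar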

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-656017
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 image save --daemon kicbase/echo-server:functional-656017 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-656017
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-656017: exit status 1 (17.877353ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-656017

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-656017

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)
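
ImageSaveDaemon fails for the same upstream reason: `image save --daemon` exports the image from the cluster runtime back into the host Docker daemon (inspected here under the localhost/ prefix), and since the earlier loads never landed the image there is nothing to export. The host side can be checked with:

	docker image ls kicbase/echo-server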

TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-656017 service --namespace=default --https --url hello-node: exit status 115 (541.13787ms)

-- stdout --
	https://192.168.49.2:30998
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-656017 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)
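
SVC_UNREACHABLE here does not mean the Service object is missing: minikube resolved a NodePort URL (https://192.168.49.2:30998) but found no running pod behind it, which is consistent with ServiceCmd/DeployApp timing out earlier in this run. A sketch of how one could confirm the empty backend; the app=hello-node selector is an assumption about the deployment's labels, and the kubecontext is assumed to match the profile name:

	kubectl --context functional-656017 get pods -l app=hello-node
	kubectl --context functional-656017 get endpoints hello-node    # an empty ENDPOINTS column means no ready pod backs the service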

TestFunctional/parallel/ServiceCmd/Format (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-656017 service hello-node --url --format={{.IP}}: exit status 115 (548.635662ms)

-- stdout --
	192.168.49.2
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-656017 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.55s)
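
Same root cause as ServiceCmd/HTTPS above: the --format template itself worked (the node IP 192.168.49.2 was printed), but the command still exits 115 because no running pod backs hello-node.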

TestFunctional/parallel/ServiceCmd/URL (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-656017 service hello-node --url: exit status 115 (552.563715ms)

-- stdout --
	http://192.168.49.2:30998
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-656017 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30998
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.55s)
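
As with the HTTPS and Format subtests, the NodePort URL was resolved correctly (the test even logs the endpoint it found); the exit status 115 reflects the same missing hello-node pod, not a URL-construction bug.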

TestJSONOutput/pause/Command (1.6s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-439592 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-439592 --output=json --user=testUser: exit status 80 (1.595522955s)

-- stdout --
	{"specversion":"1.0","id":"e221a6f2-6477-424b-8776-34e6a35b521f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-439592 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"d0b48c56-9126-4ca9-afe7-b701f640fd7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-26T14:54:38Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"55aa6940-46a9-4d91-a00a-501ef5ba4c50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-439592 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.60s)
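
The GUEST_PAUSE error bottoms out in `sudo runc list -f json` failing with "open /run/runc: no such file or directory". `runc list` enumerates container state from its root directory, /run/runc by default when running as root, so that directory simply does not exist on this node. One plausible cause on a CRI-O node (an assumption, not something this log proves) is that the configured OCI runtime keeps its state elsewhere, e.g. crun uses /run/crun, in which case a runc-based pause check looks in the wrong place. Checks one could run against the node:

	minikube -p json-output-439592 ssh -- sudo ls /run/runc /run/crun    # which runtime state dir actually exists?
	minikube -p json-output-439592 ssh -- sudo runc --root /run/runc list    # reproduce the failing call by hand
	minikube -p json-output-439592 ssh -- sudo crictl ps    # CRI-O itself still sees the containers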

TestJSONOutput/unpause/Command (1.3s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-439592 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-439592 --output=json --user=testUser: exit status 80 (1.298792111s)

-- stdout --
	{"specversion":"1.0","id":"71844625-07dd-41f6-8177-83f37a0bf67f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-439592 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"7ebf0f21-3242-4c77-9019-099d4a5a112e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-26T14:54:39Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"0164baf0-4968-43d5-a137-80c11b88cfe9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-439592 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.30s)
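
Same defect as the pause subtest directly above: GUEST_UNPAUSE fails on the identical `runc list` call against the missing /run/runc directory, so these two failures collapse into one root cause.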

TestPause/serial/Pause (5.38s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-212674 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-212674 --alsologtostderr -v=5: exit status 80 (1.762882335s)

-- stdout --
	* Pausing node pause-212674 ... 
	
	

-- /stdout --
** stderr ** 
	I1026 15:08:42.109354 1041023 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:08:42.109648 1041023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:08:42.109658 1041023 out.go:374] Setting ErrFile to fd 2...
	I1026 15:08:42.109662 1041023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:08:42.109905 1041023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:08:42.110222 1041023 out.go:368] Setting JSON to false
	I1026 15:08:42.110290 1041023 mustload.go:65] Loading cluster: pause-212674
	I1026 15:08:42.110729 1041023 config.go:182] Loaded profile config "pause-212674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:08:42.111152 1041023 cli_runner.go:164] Run: docker container inspect pause-212674 --format={{.State.Status}}
	I1026 15:08:42.130993 1041023 host.go:66] Checking if "pause-212674" exists ...
	I1026 15:08:42.131377 1041023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:08:42.192717 1041023 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:83 SystemTime:2025-10-26 15:08:42.18283402 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:08:42.193421 1041023 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-212674 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 15:08:42.195671 1041023 out.go:179] * Pausing node pause-212674 ... 
	I1026 15:08:42.196926 1041023 host.go:66] Checking if "pause-212674" exists ...
	I1026 15:08:42.197245 1041023 ssh_runner.go:195] Run: systemctl --version
	I1026 15:08:42.197287 1041023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-212674
	I1026 15:08:42.216987 1041023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33742 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/pause-212674/id_rsa Username:docker}
	I1026 15:08:42.327212 1041023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:08:42.343292 1041023 pause.go:52] kubelet running: true
	I1026 15:08:42.343346 1041023 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:08:42.494806 1041023 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:08:42.494903 1041023 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:08:42.574689 1041023 cri.go:89] found id: "76a92c2837df2cc25dc74f22756d489ca379c485519fd06a0bfe92725896ccda"
	I1026 15:08:42.574738 1041023 cri.go:89] found id: "8eb50db59ee41b8577135e83e629f9d3fb42e5c56de444ab31866918de5c351c"
	I1026 15:08:42.574745 1041023 cri.go:89] found id: "7e4f98966522c866a71e7bb28586342f5501fbd080d5cf5c1ab2482d0f8c18b4"
	I1026 15:08:42.574750 1041023 cri.go:89] found id: "3dc0a89ae51a3f687143967862912464f2f52ea10b2157c46206722f6aa5fa35"
	I1026 15:08:42.574755 1041023 cri.go:89] found id: "a660a07e95a04255795b415a311bea966b4e1dc2146f0a58983099689e76a894"
	I1026 15:08:42.574759 1041023 cri.go:89] found id: "acd73a72a0a4ab35dee77374f67abf27b8ef2df3b345ba0daf443d855d262c41"
	I1026 15:08:42.574763 1041023 cri.go:89] found id: "2ca34ae5c7a132c932b0fe42c366c6c93ec5457e9f726c9b563a57bc22191508"
	I1026 15:08:42.574767 1041023 cri.go:89] found id: ""
	I1026 15:08:42.574816 1041023 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:08:42.587152 1041023 retry.go:31] will retry after 205.81978ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:08:42Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:08:42.793636 1041023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:08:42.807077 1041023 pause.go:52] kubelet running: false
	I1026 15:08:42.807132 1041023 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:08:42.930556 1041023 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:08:42.930635 1041023 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:08:43.001355 1041023 cri.go:89] found id: "76a92c2837df2cc25dc74f22756d489ca379c485519fd06a0bfe92725896ccda"
	I1026 15:08:43.001376 1041023 cri.go:89] found id: "8eb50db59ee41b8577135e83e629f9d3fb42e5c56de444ab31866918de5c351c"
	I1026 15:08:43.001380 1041023 cri.go:89] found id: "7e4f98966522c866a71e7bb28586342f5501fbd080d5cf5c1ab2482d0f8c18b4"
	I1026 15:08:43.001383 1041023 cri.go:89] found id: "3dc0a89ae51a3f687143967862912464f2f52ea10b2157c46206722f6aa5fa35"
	I1026 15:08:43.001386 1041023 cri.go:89] found id: "a660a07e95a04255795b415a311bea966b4e1dc2146f0a58983099689e76a894"
	I1026 15:08:43.001388 1041023 cri.go:89] found id: "acd73a72a0a4ab35dee77374f67abf27b8ef2df3b345ba0daf443d855d262c41"
	I1026 15:08:43.001391 1041023 cri.go:89] found id: "2ca34ae5c7a132c932b0fe42c366c6c93ec5457e9f726c9b563a57bc22191508"
	I1026 15:08:43.001393 1041023 cri.go:89] found id: ""
	I1026 15:08:43.001431 1041023 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:08:43.013711 1041023 retry.go:31] will retry after 527.885493ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:08:43Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:08:43.542500 1041023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:08:43.557954 1041023 pause.go:52] kubelet running: false
	I1026 15:08:43.558022 1041023 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:08:43.698365 1041023 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:08:43.698468 1041023 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:08:43.781279 1041023 cri.go:89] found id: "76a92c2837df2cc25dc74f22756d489ca379c485519fd06a0bfe92725896ccda"
	I1026 15:08:43.781309 1041023 cri.go:89] found id: "8eb50db59ee41b8577135e83e629f9d3fb42e5c56de444ab31866918de5c351c"
	I1026 15:08:43.781315 1041023 cri.go:89] found id: "7e4f98966522c866a71e7bb28586342f5501fbd080d5cf5c1ab2482d0f8c18b4"
	I1026 15:08:43.781321 1041023 cri.go:89] found id: "3dc0a89ae51a3f687143967862912464f2f52ea10b2157c46206722f6aa5fa35"
	I1026 15:08:43.781326 1041023 cri.go:89] found id: "a660a07e95a04255795b415a311bea966b4e1dc2146f0a58983099689e76a894"
	I1026 15:08:43.781330 1041023 cri.go:89] found id: "acd73a72a0a4ab35dee77374f67abf27b8ef2df3b345ba0daf443d855d262c41"
	I1026 15:08:43.781335 1041023 cri.go:89] found id: "2ca34ae5c7a132c932b0fe42c366c6c93ec5457e9f726c9b563a57bc22191508"
	I1026 15:08:43.781339 1041023 cri.go:89] found id: ""
	I1026 15:08:43.781383 1041023 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:08:43.797463 1041023 out.go:203] 
	W1026 15:08:43.799042 1041023 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:08:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:08:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 15:08:43.799060 1041023 out.go:285] * 
	* 
	W1026 15:08:43.804504 1041023 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 15:08:43.806102 1041023 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-212674 --alsologtostderr -v=5" : exit status 80
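
The --alsologtostderr trace above makes the sequence explicit: kubelet is stopped cleanly, crictl enumerates the same seven kube-system container IDs on every pass, but all three `sudo runc list -f json` attempts (the initial call plus retries after 205ms and 527ms) fail on the missing /run/runc directory. This is the same runtime-state mismatch seen in TestJSONOutput/pause/Command; the diagnostic sketch given there applies to profile pause-212674 as well.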
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-212674
helpers_test.go:243: (dbg) docker inspect pause-212674:

-- stdout --
	[
	    {
	        "Id": "377ffdea3264a63ebdfde348993e64c0273e87d2b6cf6304a0d31165afc16e5a",
	        "Created": "2025-10-26T15:07:26.336720441Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1021897,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:07:27.388133268Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/377ffdea3264a63ebdfde348993e64c0273e87d2b6cf6304a0d31165afc16e5a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/377ffdea3264a63ebdfde348993e64c0273e87d2b6cf6304a0d31165afc16e5a/hostname",
	        "HostsPath": "/var/lib/docker/containers/377ffdea3264a63ebdfde348993e64c0273e87d2b6cf6304a0d31165afc16e5a/hosts",
	        "LogPath": "/var/lib/docker/containers/377ffdea3264a63ebdfde348993e64c0273e87d2b6cf6304a0d31165afc16e5a/377ffdea3264a63ebdfde348993e64c0273e87d2b6cf6304a0d31165afc16e5a-json.log",
	        "Name": "/pause-212674",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-212674:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-212674",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "377ffdea3264a63ebdfde348993e64c0273e87d2b6cf6304a0d31165afc16e5a",
	                "LowerDir": "/var/lib/docker/overlay2/ef6c3b48f7149b039426bea9be98a085117ba997d4ba75fd37608a05642d0dcd-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ef6c3b48f7149b039426bea9be98a085117ba997d4ba75fd37608a05642d0dcd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ef6c3b48f7149b039426bea9be98a085117ba997d4ba75fd37608a05642d0dcd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ef6c3b48f7149b039426bea9be98a085117ba997d4ba75fd37608a05642d0dcd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-212674",
	                "Source": "/var/lib/docker/volumes/pause-212674/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-212674",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-212674",
	                "name.minikube.sigs.k8s.io": "pause-212674",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84213ba35b4f42a410ecc9bdca465ba7d4d35d015a00d1fc3ca62b3d1154a33f",
	            "SandboxKey": "/var/run/docker/netns/84213ba35b4f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33742"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33743"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33746"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33744"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33745"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-212674": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:76:ae:57:aa:4e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1d124b296e93d8b14e472b46dfbd89b4fa531c99b4a5158ee9b770764d77fd96",
	                    "EndpointID": "bd9edbcd1259e4adb9c78e22e07475caa6636660e3638ad6fadcb2cb536dabae",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-212674",
	                        "377ffdea3264"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-212674 -n pause-212674
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-212674 -n pause-212674: exit status 2 (358.025286ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-212674 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-212674 logs -n 25: (1.053314163s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-269512 --schedule 5m                                                                                                   │ scheduled-stop-269512       │ jenkins │ v1.37.0 │ 26 Oct 25 15:05 UTC │                     │
	│ stop    │ -p scheduled-stop-269512 --schedule 15s                                                                                                  │ scheduled-stop-269512       │ jenkins │ v1.37.0 │ 26 Oct 25 15:05 UTC │                     │
	│ stop    │ -p scheduled-stop-269512 --schedule 15s                                                                                                  │ scheduled-stop-269512       │ jenkins │ v1.37.0 │ 26 Oct 25 15:05 UTC │                     │
	│ stop    │ -p scheduled-stop-269512 --schedule 15s                                                                                                  │ scheduled-stop-269512       │ jenkins │ v1.37.0 │ 26 Oct 25 15:05 UTC │                     │
	│ stop    │ -p scheduled-stop-269512 --cancel-scheduled                                                                                              │ scheduled-stop-269512       │ jenkins │ v1.37.0 │ 26 Oct 25 15:05 UTC │ 26 Oct 25 15:05 UTC │
	│ stop    │ -p scheduled-stop-269512 --schedule 15s                                                                                                  │ scheduled-stop-269512       │ jenkins │ v1.37.0 │ 26 Oct 25 15:06 UTC │                     │
	│ stop    │ -p scheduled-stop-269512 --schedule 15s                                                                                                  │ scheduled-stop-269512       │ jenkins │ v1.37.0 │ 26 Oct 25 15:06 UTC │                     │
	│ stop    │ -p scheduled-stop-269512 --schedule 15s                                                                                                  │ scheduled-stop-269512       │ jenkins │ v1.37.0 │ 26 Oct 25 15:06 UTC │ 26 Oct 25 15:06 UTC │
	│ delete  │ -p scheduled-stop-269512                                                                                                                 │ scheduled-stop-269512       │ jenkins │ v1.37.0 │ 26 Oct 25 15:06 UTC │ 26 Oct 25 15:06 UTC │
	│ start   │ -p insufficient-storage-263685 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-263685 │ jenkins │ v1.37.0 │ 26 Oct 25 15:07 UTC │                     │
	│ delete  │ -p insufficient-storage-263685                                                                                                           │ insufficient-storage-263685 │ jenkins │ v1.37.0 │ 26 Oct 25 15:07 UTC │ 26 Oct 25 15:07 UTC │
	│ start   │ -p offline-crio-100892 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-100892         │ jenkins │ v1.37.0 │ 26 Oct 25 15:07 UTC │ 26 Oct 25 15:08 UTC │
	│ start   │ -p pause-212674 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-212674                │ jenkins │ v1.37.0 │ 26 Oct 25 15:07 UTC │ 26 Oct 25 15:08 UTC │
	│ start   │ -p kubernetes-upgrade-176599 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-176599   │ jenkins │ v1.37.0 │ 26 Oct 25 15:07 UTC │ 26 Oct 25 15:07 UTC │
	│ start   │ -p missing-upgrade-374022 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-374022      │ jenkins │ v1.32.0 │ 26 Oct 25 15:07 UTC │ 26 Oct 25 15:07 UTC │
	│ stop    │ -p kubernetes-upgrade-176599                                                                                                             │ kubernetes-upgrade-176599   │ jenkins │ v1.37.0 │ 26 Oct 25 15:07 UTC │ 26 Oct 25 15:07 UTC │
	│ start   │ -p kubernetes-upgrade-176599 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-176599   │ jenkins │ v1.37.0 │ 26 Oct 25 15:07 UTC │                     │
	│ start   │ -p missing-upgrade-374022 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-374022      │ jenkins │ v1.37.0 │ 26 Oct 25 15:08 UTC │ 26 Oct 25 15:08 UTC │
	│ delete  │ -p offline-crio-100892                                                                                                                   │ offline-crio-100892         │ jenkins │ v1.37.0 │ 26 Oct 25 15:08 UTC │ 26 Oct 25 15:08 UTC │
	│ start   │ -p running-upgrade-917646 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-917646      │ jenkins │ v1.32.0 │ 26 Oct 25 15:08 UTC │ 26 Oct 25 15:08 UTC │
	│ start   │ -p pause-212674 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-212674                │ jenkins │ v1.37.0 │ 26 Oct 25 15:08 UTC │ 26 Oct 25 15:08 UTC │
	│ start   │ -p running-upgrade-917646 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-917646      │ jenkins │ v1.37.0 │ 26 Oct 25 15:08 UTC │                     │
	│ delete  │ -p missing-upgrade-374022                                                                                                                │ missing-upgrade-374022      │ jenkins │ v1.37.0 │ 26 Oct 25 15:08 UTC │ 26 Oct 25 15:08 UTC │
	│ pause   │ -p pause-212674 --alsologtostderr -v=5                                                                                                   │ pause-212674                │ jenkins │ v1.37.0 │ 26 Oct 25 15:08 UTC │                     │
	│ start   │ -p stopped-upgrade-886432 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-886432      │ jenkins │ v1.32.0 │ 26 Oct 25 15:08 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:08:43
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:08:43.576818 1041604 out.go:296] Setting OutFile to fd 1 ...
	I1026 15:08:43.577005 1041604 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 15:08:43.577011 1041604 out.go:309] Setting ErrFile to fd 2...
	I1026 15:08:43.577018 1041604 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 15:08:43.577364 1041604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:08:43.577877 1041604 out.go:303] Setting JSON to false
	I1026 15:08:43.579121 1041604 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10272,"bootTime":1761481052,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:08:43.579217 1041604 start.go:138] virtualization: kvm guest
	I1026 15:08:43.581399 1041604 out.go:177] * [stopped-upgrade-886432] minikube v1.32.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:08:43.582833 1041604 out.go:177]   - MINIKUBE_LOCATION=21664
	I1026 15:08:43.582928 1041604 notify.go:220] Checking for updates...
	I1026 15:08:43.584105 1041604 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:08:43.585642 1041604 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 15:08:43.587753 1041604 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:08:43.589016 1041604 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:08:43.590153 1041604 out.go:177]   - KUBECONFIG=/tmp/legacy_kubeconfig1032558125
	I1026 15:08:43.595446 1041604 config.go:182] Loaded profile config "kubernetes-upgrade-176599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:08:43.595636 1041604 config.go:182] Loaded profile config "pause-212674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:08:43.595750 1041604 config.go:182] Loaded profile config "running-upgrade-917646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 15:08:43.595871 1041604 driver.go:378] Setting default libvirt URI to qemu:///system
	I1026 15:08:43.629060 1041604 docker.go:122] docker version: linux-28.5.1:Docker Engine - Community
	I1026 15:08:43.629208 1041604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:08:43.695497 1041604 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 15:08:43.683690768 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:08:43.695595 1041604 docker.go:295] overlay module found
	I1026 15:08:43.698120 1041604 out.go:177] * Using the docker driver based on user configuration
	I1026 15:08:43.700086 1041604 start.go:298] selected driver: docker
	I1026 15:08:43.700095 1041604 start.go:902] validating driver "docker" against <nil>
	I1026 15:08:43.700119 1041604 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:08:43.700700 1041604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:08:43.767267 1041604 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 15:08:43.756353936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:08:43.767497 1041604 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1026 15:08:43.767795 1041604 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 15:08:43.772275 1041604 out.go:177] * Using Docker driver with root privileges
	I1026 15:08:43.773565 1041604 cni.go:84] Creating CNI manager for ""
	I1026 15:08:43.773581 1041604 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:08:43.773592 1041604 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 15:08:43.773602 1041604 start_flags.go:323] config:
	{Name:stopped-upgrade-886432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-886432 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1026 15:08:43.775063 1041604 out.go:177] * Starting control plane node stopped-upgrade-886432 in cluster stopped-upgrade-886432
	I1026 15:08:43.776307 1041604 cache.go:121] Beginning downloading kic base image for docker with crio
	I1026 15:08:43.777539 1041604 out.go:177] * Pulling base image ...
	I1026 15:08:43.779187 1041604 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1026 15:08:43.779232 1041604 preload.go:148] Found local preload: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1026 15:08:43.779253 1041604 cache.go:56] Caching tarball of preloaded images
	I1026 15:08:43.779275 1041604 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1026 15:08:43.779352 1041604 preload.go:174] Found /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 15:08:43.779361 1041604 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1026 15:08:43.779506 1041604 profile.go:148] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/stopped-upgrade-886432/config.json ...
	I1026 15:08:43.779525 1041604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/stopped-upgrade-886432/config.json: {Name:mk3c62821d5d8fb38e1f67db7c60cfa9ed80751e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:08:43.798289 1041604 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1026 15:08:43.798316 1041604 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1026 15:08:43.798346 1041604 cache.go:194] Successfully downloaded all kic artifacts
	I1026 15:08:43.798391 1041604 start.go:365] acquiring machines lock for stopped-upgrade-886432: {Name:mk37e3241add2102f98fec7d7f1a0b73d329b120 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:08:43.798506 1041604 start.go:369] acquired machines lock for "stopped-upgrade-886432" in 92.523µs
	I1026 15:08:43.798539 1041604 start.go:93] Provisioning new machine with config: &{Name:stopped-upgrade-886432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-886432 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:08:43.798647 1041604 start.go:125] createHost starting for "" (driver="docker")
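
The trace above is the normal docker-driver start path: driver validation, cluster-config generation, preload tarball lookup, kic base image check in the local daemon, then host provisioning. To reproduce the same configuration by hand, a minimal sketch (flag values read off the logged config dump, not taken from the test harness):

  # Recreate a comparable cluster; values mirror the config above
  minikube start -p stopped-upgrade-886432 \
    --driver=docker \
    --container-runtime=crio \
    --kubernetes-version=v1.28.3 \
    --memory=3072 --cpus=2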
	
	
	==> CRI-O <==
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.701541066Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.702371706Z" level=info msg="Conmon does support the --sync option"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.702395654Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.702408995Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.703139914Z" level=info msg="Conmon does support the --sync option"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.703158689Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.707467551Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.707500467Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.708021324Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.708476174Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.708533692Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.714203016Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.760863736Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-lhn84 Namespace:kube-system ID:e9717b72b07cac6c62cfee2bb58abf60a615035b408f369f8aa93dc96efbfb7d UID:fd7b2cf8-0d3e-48ee-8c60-a27936f4cb3c NetNS:/var/run/netns/9b07ab87-c9d6-4c61-bbea-c82e5fd5c978 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000990120}] Aliases:map[]}"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.761061908Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-lhn84 for CNI network kindnet (type=ptp)"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.761553006Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.761573822Z" level=info msg="Starting seccomp notifier watcher"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.761628073Z" level=info msg="Create NRI interface"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.761725385Z" level=info msg="built-in NRI default validator is disabled"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.761739161Z" level=info msg="runtime interface created"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.761752989Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.761759546Z" level=info msg="runtime interface starting up..."
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.761767176Z" level=info msg="starting plugins..."
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.761781644Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.762261564Z" level=info msg="No systemd watchdog enabled"
	Oct 26 15:08:38 pause-212674 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
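
The block above is CRI-O relogging its effective configuration as systemd restarts it during the pause test. To get the same TOML from a live node without a restart, `crio config` prints the configuration CRI-O would run with (a sketch; assumes the crio binary is on PATH inside the node, as it is for minikube's crio runtime):

  # Dump the effective CRI-O configuration on the node
  minikube ssh -p pause-212674 -- sudo crio config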
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	76a92c2837df2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago       Running             coredns                   0                   e9717b72b07ca       coredns-66bc5c9577-lhn84               kube-system
	8eb50db59ee41       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   52 seconds ago       Running             kube-proxy                0                   bab37c6ab94e2       kube-proxy-99d97                       kube-system
	7e4f98966522c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   52 seconds ago       Running             kindnet-cni               0                   a2496238005ce       kindnet-bjh7x                          kube-system
	3dc0a89ae51a3       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   About a minute ago   Running             kube-apiserver            0                   58b8c3d4c17fa       kube-apiserver-pause-212674            kube-system
	a660a07e95a04       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   About a minute ago   Running             kube-controller-manager   0                   5c9ce9561ea8f       kube-controller-manager-pause-212674   kube-system
	acd73a72a0a4a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Running             etcd                      0                   a0ee9564f2c6c       etcd-pause-212674                      kube-system
	2ca34ae5c7a13       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Running             kube-scheduler            0                   51af68ddcdf7e       kube-scheduler-pause-212674            kube-system
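
This table is CRI container state as the collector saw it: all control-plane containers Running, with coredns started 11 seconds earlier, matching the NodeReady transition in the node events below. The same view comes straight from crictl:

  # List all CRI containers with image, state, attempt and pod
  minikube ssh -p pause-212674 -- sudo crictl ps -a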
	
	
	==> coredns [76a92c2837df2cc25dc74f22756d489ca379c485519fd06a0bfe92725896ccda] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55040 - 39294 "HINFO IN 7554385736374659516.1494010492529213197. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.471665612s
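
The single NXDOMAIN line is expected rather than an error: CoreDNS's loop plugin fires one randomized HINFO query at startup to detect forwarding loops, and an NXDOMAIN answer is the healthy outcome. A common spot-check that cluster DNS resolves normally (dns-check is a hypothetical pod name; any image with nslookup works):

  # One-off DNS probe from inside the cluster
  kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default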
	
	
	==> describe nodes <==
	Name:               pause-212674
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-212674
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=pause-212674
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_07_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:07:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-212674
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:08:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:08:37 +0000   Sun, 26 Oct 2025 15:07:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:08:37 +0000   Sun, 26 Oct 2025 15:07:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:08:37 +0000   Sun, 26 Oct 2025 15:07:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:08:37 +0000   Sun, 26 Oct 2025 15:08:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-212674
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                d0a35729-da6a-4c5b-aefa-8cad991b17a2
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-lhn84                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     53s
	  kube-system                 etcd-pause-212674                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         58s
	  kube-system                 kindnet-bjh7x                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      53s
	  kube-system                 kube-apiserver-pause-212674             250m (3%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-controller-manager-pause-212674    200m (2%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-proxy-99d97                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-scheduler-pause-212674             100m (1%)     0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 52s   kube-proxy       
	  Normal  Starting                 58s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s   kubelet          Node pause-212674 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s   kubelet          Node pause-212674 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s   kubelet          Node pause-212674 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s   node-controller  Node pause-212674 event: Registered Node pause-212674 in Controller
	  Normal  NodeReady                12s   kubelet          Node pause-212674 status is now: NodeReady
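
This dump is `kubectl describe node` output captured at 15:08:45; the Events timeline shows the node only reached Ready 12 seconds before collection, which lines up with the kubelet/CRI-O churn in the sections below. To pull the same view from the profile:

  # minikube names the kubeconfig context after the profile
  kubectl --context pause-212674 describe node pause-212674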
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	
	
	==> etcd [acd73a72a0a4ab35dee77374f67abf27b8ef2df3b345ba0daf443d855d262c41] <==
	{"level":"warn","ts":"2025-10-26T15:07:43.128893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.136915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.144804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.153567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.160643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.167512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.176377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.182933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.189571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.196550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.203157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.209743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.217925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.224601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.231766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.239403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.246881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.257348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.270466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.277380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.287280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.332748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59150","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T15:08:00.142440Z","caller":"traceutil/trace.go:172","msg":"trace[294195410] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"218.747579ms","start":"2025-10-26T15:07:59.923673Z","end":"2025-10-26T15:08:00.142420Z","steps":["trace[294195410] 'process raft request'  (duration: 218.54571ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:08:17.847274Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.620175ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-212674\" limit:1 ","response":"range_response_count:1 size:5582"}
	{"level":"info","ts":"2025-10-26T15:08:17.847382Z","caller":"traceutil/trace.go:172","msg":"trace[1109080119] range","detail":"{range_begin:/registry/minions/pause-212674; range_end:; response_count:1; response_revision:389; }","duration":"105.743495ms","start":"2025-10-26T15:08:17.741616Z","end":"2025-10-26T15:08:17.847359Z","steps":["trace[1109080119] 'range keys from in-memory index tree'  (duration: 105.541121ms)"],"step_count":1}
	
	
	==> kernel <==
	 15:08:45 up  2:51,  0 user,  load average: 3.91, 1.97, 1.31
	Linux pause-212674 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7e4f98966522c866a71e7bb28586342f5501fbd080d5cf5c1ab2482d0f8c18b4] <==
	I1026 15:07:52.141486       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:07:52.141868       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1026 15:07:52.142102       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:07:52.142131       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:07:52.142246       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:07:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:07:52.395137       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:07:52.395208       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:07:52.395220       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:07:52.395345       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 15:08:22.396978       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1026 15:08:22.397043       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 15:08:22.397058       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 15:08:22.396720       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1026 15:08:23.995485       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:08:23.995525       1 metrics.go:72] Registering metrics
	I1026 15:08:23.995637       1 controller.go:711] "Syncing nftables rules"
	I1026 15:08:32.402278       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 15:08:32.402314       1 main.go:301] handling current node
	I1026 15:08:42.398304       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 15:08:42.398386       1 main.go:301] handling current node
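
kindnet's list/watch calls against the service VIP 10.96.0.1:443 timed out at 15:08:22 and recovered a second later, which is consistent with a brief control-plane interruption during the pause/unpause sequence rather than a CNI fault. A quick reachability probe for the VIP (vip-check is a hypothetical pod name; any HTTP status code back proves the path works):

  # Confirm the kubernetes Service VIP answers from inside the cluster
  kubectl run vip-check --rm -it --restart=Never --image=curlimages/curl -- \
    curl -sk -o /dev/null -w '%{http_code}\n' https://10.96.0.1:443/healthz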
	
	
	==> kube-apiserver [3dc0a89ae51a3f687143967862912464f2f52ea10b2157c46206722f6aa5fa35] <==
	I1026 15:07:43.862915       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 15:07:43.869601       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1026 15:07:43.869653       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:07:43.873951       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:07:43.874198       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:07:43.874309       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1026 15:07:43.878804       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 15:07:44.041941       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:07:44.750072       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 15:07:44.754073       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 15:07:44.754093       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:07:45.502864       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:07:45.551287       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:07:45.659576       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 15:07:45.671279       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1026 15:07:45.672740       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:07:45.678427       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:07:45.792111       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:07:46.735350       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:07:46.748692       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 15:07:46.764574       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 15:07:51.095634       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 15:07:51.543940       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1026 15:07:51.792228       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:07:51.796474       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [a660a07e95a04255795b415a311bea966b4e1dc2146f0a58983099689e76a894] <==
	I1026 15:07:50.788830       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 15:07:50.788830       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 15:07:50.788858       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 15:07:50.789264       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 15:07:50.790238       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 15:07:50.790253       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 15:07:50.791882       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 15:07:50.794188       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 15:07:50.794267       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 15:07:50.796505       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 15:07:50.796523       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 15:07:50.796570       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 15:07:50.796608       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 15:07:50.796612       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 15:07:50.796617       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 15:07:50.801902       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 15:07:50.804404       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-212674" podCIDRs=["10.244.0.0/24"]
	I1026 15:07:50.807059       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:07:50.812256       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 15:07:50.829669       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:07:50.834013       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:07:50.837403       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:07:50.837428       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:07:50.837443       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 15:08:35.742509       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8eb50db59ee41b8577135e83e629f9d3fb42e5c56de444ab31866918de5c351c] <==
	I1026 15:07:52.038058       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:07:52.099708       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:07:52.200274       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:07:52.200328       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1026 15:07:52.200426       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:07:52.221388       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:07:52.221461       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:07:52.227674       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:07:52.228119       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:07:52.228157       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:07:52.229687       1 config.go:200] "Starting service config controller"
	I1026 15:07:52.229712       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:07:52.229777       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:07:52.229808       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:07:52.230463       1 config.go:309] "Starting node config controller"
	I1026 15:07:52.230576       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:07:52.230589       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:07:52.230921       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:07:52.230949       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:07:52.329885       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:07:52.331099       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 15:07:52.331111       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2ca34ae5c7a132c932b0fe42c366c6c93ec5457e9f726c9b563a57bc22191508] <==
	E1026 15:07:43.803995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 15:07:43.804133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 15:07:43.804279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 15:07:43.804222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 15:07:43.804396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 15:07:43.804471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 15:07:43.804494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 15:07:43.804637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 15:07:44.634965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 15:07:44.682556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 15:07:44.684590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 15:07:44.725411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 15:07:44.736975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 15:07:44.792225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 15:07:44.805149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 15:07:44.828776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 15:07:44.835401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 15:07:44.845058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 15:07:44.896331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 15:07:44.974735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 15:07:45.045348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1026 15:07:45.212428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 15:07:45.217985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 15:07:45.221698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1026 15:07:47.404263       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
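
The burst of "forbidden" errors between 15:07:43 and 15:07:45 is the scheduler starting before its RBAC bindings exist; once the client-ca informer syncs at 15:07:47 the errors stop and never recur. To verify the permissions it eventually received:

  # Impersonate the scheduler identity and re-test the denied list calls
  kubectl auth can-i list pods --as=system:kube-scheduler
  kubectl auth can-i list csidrivers.storage.k8s.io --as=system:kube-scheduler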
	
	
	==> kubelet <==
	Oct 26 15:07:51 pause-212674 kubelet[1293]: I1026 15:07:51.644419    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvhx9\" (UniqueName: \"kubernetes.io/projected/7f41d1e4-dda2-49ea-8eab-5c7f7c6eb03e-kube-api-access-xvhx9\") pod \"kindnet-bjh7x\" (UID: \"7f41d1e4-dda2-49ea-8eab-5c7f7c6eb03e\") " pod="kube-system/kindnet-bjh7x"
	Oct 26 15:07:51 pause-212674 kubelet[1293]: I1026 15:07:51.644436    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ee5c15db-0142-42d7-bf85-d700a411e536-kube-proxy\") pod \"kube-proxy-99d97\" (UID: \"ee5c15db-0142-42d7-bf85-d700a411e536\") " pod="kube-system/kube-proxy-99d97"
	Oct 26 15:07:51 pause-212674 kubelet[1293]: I1026 15:07:51.644449    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee5c15db-0142-42d7-bf85-d700a411e536-lib-modules\") pod \"kube-proxy-99d97\" (UID: \"ee5c15db-0142-42d7-bf85-d700a411e536\") " pod="kube-system/kube-proxy-99d97"
	Oct 26 15:07:51 pause-212674 kubelet[1293]: I1026 15:07:51.644463    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f41d1e4-dda2-49ea-8eab-5c7f7c6eb03e-xtables-lock\") pod \"kindnet-bjh7x\" (UID: \"7f41d1e4-dda2-49ea-8eab-5c7f7c6eb03e\") " pod="kube-system/kindnet-bjh7x"
	Oct 26 15:07:52 pause-212674 kubelet[1293]: I1026 15:07:52.731148    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-99d97" podStartSLOduration=1.7311305319999999 podStartE2EDuration="1.731130532s" podCreationTimestamp="2025-10-26 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:07:52.730980647 +0000 UTC m=+6.234321533" watchObservedRunningTime="2025-10-26 15:07:52.731130532 +0000 UTC m=+6.234471416"
	Oct 26 15:07:52 pause-212674 kubelet[1293]: I1026 15:07:52.742127    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-bjh7x" podStartSLOduration=1.742105955 podStartE2EDuration="1.742105955s" podCreationTimestamp="2025-10-26 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:07:52.741888344 +0000 UTC m=+6.245229243" watchObservedRunningTime="2025-10-26 15:07:52.742105955 +0000 UTC m=+6.245446840"
	Oct 26 15:08:32 pause-212674 kubelet[1293]: I1026 15:08:32.733743    1293 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 26 15:08:32 pause-212674 kubelet[1293]: I1026 15:08:32.837424    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhsg4\" (UniqueName: \"kubernetes.io/projected/fd7b2cf8-0d3e-48ee-8c60-a27936f4cb3c-kube-api-access-qhsg4\") pod \"coredns-66bc5c9577-lhn84\" (UID: \"fd7b2cf8-0d3e-48ee-8c60-a27936f4cb3c\") " pod="kube-system/coredns-66bc5c9577-lhn84"
	Oct 26 15:08:32 pause-212674 kubelet[1293]: I1026 15:08:32.837517    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd7b2cf8-0d3e-48ee-8c60-a27936f4cb3c-config-volume\") pod \"coredns-66bc5c9577-lhn84\" (UID: \"fd7b2cf8-0d3e-48ee-8c60-a27936f4cb3c\") " pod="kube-system/coredns-66bc5c9577-lhn84"
	Oct 26 15:08:33 pause-212674 kubelet[1293]: I1026 15:08:33.821216    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lhn84" podStartSLOduration=42.821193761 podStartE2EDuration="42.821193761s" podCreationTimestamp="2025-10-26 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:08:33.820898412 +0000 UTC m=+47.324239334" watchObservedRunningTime="2025-10-26 15:08:33.821193761 +0000 UTC m=+47.324534649"
	Oct 26 15:08:36 pause-212674 kubelet[1293]: W1026 15:08:36.816507    1293 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 26 15:08:36 pause-212674 kubelet[1293]: E1026 15:08:36.816646    1293 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 26 15:08:36 pause-212674 kubelet[1293]: E1026 15:08:36.816696    1293 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 15:08:36 pause-212674 kubelet[1293]: E1026 15:08:36.816710    1293 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 15:08:36 pause-212674 kubelet[1293]: W1026 15:08:36.917266    1293 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 26 15:08:38 pause-212674 kubelet[1293]: W1026 15:08:38.668949    1293 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 26 15:08:38 pause-212674 kubelet[1293]: E1026 15:08:38.669036    1293 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Oct 26 15:08:38 pause-212674 kubelet[1293]: E1026 15:08:38.669124    1293 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 15:08:38 pause-212674 kubelet[1293]: E1026 15:08:38.669145    1293 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 15:08:38 pause-212674 kubelet[1293]: E1026 15:08:38.669192    1293 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 15:08:42 pause-212674 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:08:42 pause-212674 kubelet[1293]: I1026 15:08:42.468516    1293 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 26 15:08:42 pause-212674 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:08:42 pause-212674 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 26 15:08:42 pause-212674 systemd[1]: kubelet.service: Consumed 2.362s CPU time.
	

                                                
                                                
-- /stdout --
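The kubelet's gRPC "Unavailable" errors in the log above all wrap one syscall-level fact: while crio.service is restarting, the socket file /var/run/crio/crio.sock does not exist, so every dial fails with ENOENT. A minimal Go sketch of that probe (illustrative only, not part of the test harness; the socket path is taken from the log):

	// socketprobe.go - a hypothetical helper sketching the dial the kubelet performs.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/crio/crio.sock" // endpoint from the kubelet errors above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// While the runtime is down the socket file is absent, and this prints:
			//   dial unix /var/run/crio/crio.sock: connect: no such file or directory
			// i.e. the same error wrapped inside the gRPC "Unavailable" messages above.
			fmt.Println("CRI endpoint unavailable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("CRI endpoint reachable")
	}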
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-212674 -n pause-212674
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-212674 -n pause-212674: exit status 2 (340.930941ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
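The "(may be ok)" note reflects that minikube encodes component state in the exit code of minikube status, so a non-zero exit that still produces readable stdout ("Running" above) reports a degraded state rather than a harness failure. A hedged Go sketch of how a caller separates "ran, non-zero exit" from "could not run at all" (binary path, format string, and profile copied from the log; the exact exit-code mapping is minikube's and not asserted here):

	// statuscheck.go - illustrative sketch, not the test harness's actual code.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", "pause-212674", "-n", "pause-212674")
		out, err := cmd.Output()
		var ee *exec.ExitError
		switch {
		case errors.As(err, &ee):
			// Ran, but exited non-zero; stdout (e.g. "Running") is still usable,
			// which is why the harness treats this as "may be ok".
			fmt.Printf("status %q, exit code %d\n", out, ee.ExitCode())
		case err != nil:
			// Did not run at all (binary missing, permissions, ...): a hard failure.
			fmt.Println("failed to invoke minikube:", err)
		default:
			fmt.Printf("status %q, exit code 0\n", out)
		}
	}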
helpers_test.go:269: (dbg) Run:  kubectl --context pause-212674 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
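The not-Running filter used by the kubectl call above (--field-selector=status.phase!=Running) can also be issued programmatically; a minimal client-go sketch, assuming the current kubeconfig context already points at the pause-212674 cluster (context selection, which kubectl gets via --context, is omitted here):

	// notrunning.go - hypothetical sketch of the same pod query via client-go.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same filter as the kubectl call above: every pod, in all namespaces
		// (empty namespace argument), whose phase is not Running.
		pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name, p.Status.Phase)
		}
	}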
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-212674
helpers_test.go:243: (dbg) docker inspect pause-212674:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "377ffdea3264a63ebdfde348993e64c0273e87d2b6cf6304a0d31165afc16e5a",
	        "Created": "2025-10-26T15:07:26.336720441Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1021897,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:07:27.388133268Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/377ffdea3264a63ebdfde348993e64c0273e87d2b6cf6304a0d31165afc16e5a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/377ffdea3264a63ebdfde348993e64c0273e87d2b6cf6304a0d31165afc16e5a/hostname",
	        "HostsPath": "/var/lib/docker/containers/377ffdea3264a63ebdfde348993e64c0273e87d2b6cf6304a0d31165afc16e5a/hosts",
	        "LogPath": "/var/lib/docker/containers/377ffdea3264a63ebdfde348993e64c0273e87d2b6cf6304a0d31165afc16e5a/377ffdea3264a63ebdfde348993e64c0273e87d2b6cf6304a0d31165afc16e5a-json.log",
	        "Name": "/pause-212674",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-212674:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-212674",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "377ffdea3264a63ebdfde348993e64c0273e87d2b6cf6304a0d31165afc16e5a",
	                "LowerDir": "/var/lib/docker/overlay2/ef6c3b48f7149b039426bea9be98a085117ba997d4ba75fd37608a05642d0dcd-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ef6c3b48f7149b039426bea9be98a085117ba997d4ba75fd37608a05642d0dcd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ef6c3b48f7149b039426bea9be98a085117ba997d4ba75fd37608a05642d0dcd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ef6c3b48f7149b039426bea9be98a085117ba997d4ba75fd37608a05642d0dcd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-212674",
	                "Source": "/var/lib/docker/volumes/pause-212674/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-212674",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-212674",
	                "name.minikube.sigs.k8s.io": "pause-212674",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84213ba35b4f42a410ecc9bdca465ba7d4d35d015a00d1fc3ca62b3d1154a33f",
	            "SandboxKey": "/var/run/docker/netns/84213ba35b4f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33742"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33743"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33746"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33744"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33745"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-212674": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:76:ae:57:aa:4e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1d124b296e93d8b14e472b46dfbd89b4fa531c99b4a5158ee9b770764d77fd96",
	                    "EndpointID": "bd9edbcd1259e4adb9c78e22e07475caa6636660e3638ad6fadcb2cb536dabae",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-212674",
	                        "377ffdea3264"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
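Single fields can be pulled out of an inspect document like the one above with docker inspect's standard -f/--format Go template instead of reading the full JSON. An illustrative sketch (container name taken from the log):

	// inspectstate.go - hypothetical helper; equivalent shell:
	//   docker inspect -f '{{.State.Status}} {{.State.Paused}}' pause-212674
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "inspect", "-f",
			"{{.State.Status}} {{.State.Paused}}", "pause-212674").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// For the container above this prints "running false", matching the
		// State block in the full JSON dump.
		fmt.Println(strings.TrimSpace(string(out)))
	}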
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-212674 -n pause-212674
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-212674 -n pause-212674: exit status 2 (351.706177ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-212674 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-269512 --schedule 5m                                                                                                   │ scheduled-stop-269512       │ jenkins │ v1.37.0 │ 26 Oct 25 15:05 UTC │                     │
	│ stop    │ -p scheduled-stop-269512 --schedule 15s                                                                                                  │ scheduled-stop-269512       │ jenkins │ v1.37.0 │ 26 Oct 25 15:05 UTC │                     │
	│ stop    │ -p scheduled-stop-269512 --schedule 15s                                                                                                  │ scheduled-stop-269512       │ jenkins │ v1.37.0 │ 26 Oct 25 15:05 UTC │                     │
	│ stop    │ -p scheduled-stop-269512 --schedule 15s                                                                                                  │ scheduled-stop-269512       │ jenkins │ v1.37.0 │ 26 Oct 25 15:05 UTC │                     │
	│ stop    │ -p scheduled-stop-269512 --cancel-scheduled                                                                                              │ scheduled-stop-269512       │ jenkins │ v1.37.0 │ 26 Oct 25 15:05 UTC │ 26 Oct 25 15:05 UTC │
	│ stop    │ -p scheduled-stop-269512 --schedule 15s                                                                                                  │ scheduled-stop-269512       │ jenkins │ v1.37.0 │ 26 Oct 25 15:06 UTC │                     │
	│ stop    │ -p scheduled-stop-269512 --schedule 15s                                                                                                  │ scheduled-stop-269512       │ jenkins │ v1.37.0 │ 26 Oct 25 15:06 UTC │                     │
	│ stop    │ -p scheduled-stop-269512 --schedule 15s                                                                                                  │ scheduled-stop-269512       │ jenkins │ v1.37.0 │ 26 Oct 25 15:06 UTC │ 26 Oct 25 15:06 UTC │
	│ delete  │ -p scheduled-stop-269512                                                                                                                 │ scheduled-stop-269512       │ jenkins │ v1.37.0 │ 26 Oct 25 15:06 UTC │ 26 Oct 25 15:06 UTC │
	│ start   │ -p insufficient-storage-263685 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-263685 │ jenkins │ v1.37.0 │ 26 Oct 25 15:07 UTC │                     │
	│ delete  │ -p insufficient-storage-263685                                                                                                           │ insufficient-storage-263685 │ jenkins │ v1.37.0 │ 26 Oct 25 15:07 UTC │ 26 Oct 25 15:07 UTC │
	│ start   │ -p offline-crio-100892 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-100892         │ jenkins │ v1.37.0 │ 26 Oct 25 15:07 UTC │ 26 Oct 25 15:08 UTC │
	│ start   │ -p pause-212674 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-212674                │ jenkins │ v1.37.0 │ 26 Oct 25 15:07 UTC │ 26 Oct 25 15:08 UTC │
	│ start   │ -p kubernetes-upgrade-176599 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-176599   │ jenkins │ v1.37.0 │ 26 Oct 25 15:07 UTC │ 26 Oct 25 15:07 UTC │
	│ start   │ -p missing-upgrade-374022 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-374022      │ jenkins │ v1.32.0 │ 26 Oct 25 15:07 UTC │ 26 Oct 25 15:07 UTC │
	│ stop    │ -p kubernetes-upgrade-176599                                                                                                             │ kubernetes-upgrade-176599   │ jenkins │ v1.37.0 │ 26 Oct 25 15:07 UTC │ 26 Oct 25 15:07 UTC │
	│ start   │ -p kubernetes-upgrade-176599 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-176599   │ jenkins │ v1.37.0 │ 26 Oct 25 15:07 UTC │                     │
	│ start   │ -p missing-upgrade-374022 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-374022      │ jenkins │ v1.37.0 │ 26 Oct 25 15:08 UTC │ 26 Oct 25 15:08 UTC │
	│ delete  │ -p offline-crio-100892                                                                                                                   │ offline-crio-100892         │ jenkins │ v1.37.0 │ 26 Oct 25 15:08 UTC │ 26 Oct 25 15:08 UTC │
	│ start   │ -p running-upgrade-917646 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-917646      │ jenkins │ v1.32.0 │ 26 Oct 25 15:08 UTC │ 26 Oct 25 15:08 UTC │
	│ start   │ -p pause-212674 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-212674                │ jenkins │ v1.37.0 │ 26 Oct 25 15:08 UTC │ 26 Oct 25 15:08 UTC │
	│ start   │ -p running-upgrade-917646 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-917646      │ jenkins │ v1.37.0 │ 26 Oct 25 15:08 UTC │                     │
	│ delete  │ -p missing-upgrade-374022                                                                                                                │ missing-upgrade-374022      │ jenkins │ v1.37.0 │ 26 Oct 25 15:08 UTC │ 26 Oct 25 15:08 UTC │
	│ pause   │ -p pause-212674 --alsologtostderr -v=5                                                                                                   │ pause-212674                │ jenkins │ v1.37.0 │ 26 Oct 25 15:08 UTC │                     │
	│ start   │ -p stopped-upgrade-886432 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-886432      │ jenkins │ v1.32.0 │ 26 Oct 25 15:08 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:08:43
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:08:43.576818 1041604 out.go:296] Setting OutFile to fd 1 ...
	I1026 15:08:43.577005 1041604 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 15:08:43.577011 1041604 out.go:309] Setting ErrFile to fd 2...
	I1026 15:08:43.577018 1041604 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1026 15:08:43.577364 1041604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:08:43.577877 1041604 out.go:303] Setting JSON to false
	I1026 15:08:43.579121 1041604 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10272,"bootTime":1761481052,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:08:43.579217 1041604 start.go:138] virtualization: kvm guest
	I1026 15:08:43.581399 1041604 out.go:177] * [stopped-upgrade-886432] minikube v1.32.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:08:43.582833 1041604 out.go:177]   - MINIKUBE_LOCATION=21664
	I1026 15:08:43.582928 1041604 notify.go:220] Checking for updates...
	I1026 15:08:43.584105 1041604 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:08:43.585642 1041604 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 15:08:43.587753 1041604 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:08:43.589016 1041604 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:08:43.590153 1041604 out.go:177]   - KUBECONFIG=/tmp/legacy_kubeconfig1032558125
	I1026 15:08:43.595446 1041604 config.go:182] Loaded profile config "kubernetes-upgrade-176599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:08:43.595636 1041604 config.go:182] Loaded profile config "pause-212674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:08:43.595750 1041604 config.go:182] Loaded profile config "running-upgrade-917646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 15:08:43.595871 1041604 driver.go:378] Setting default libvirt URI to qemu:///system
	I1026 15:08:43.629060 1041604 docker.go:122] docker version: linux-28.5.1:Docker Engine - Community
	I1026 15:08:43.629208 1041604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:08:43.695497 1041604 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 15:08:43.683690768 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:08:43.695595 1041604 docker.go:295] overlay module found
	I1026 15:08:43.698120 1041604 out.go:177] * Using the docker driver based on user configuration
	I1026 15:08:43.700086 1041604 start.go:298] selected driver: docker
	I1026 15:08:43.700095 1041604 start.go:902] validating driver "docker" against <nil>
	I1026 15:08:43.700119 1041604 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:08:43.700700 1041604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:08:43.767267 1041604 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 15:08:43.756353936 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:08:43.767497 1041604 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1026 15:08:43.767795 1041604 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 15:08:43.772275 1041604 out.go:177] * Using Docker driver with root privileges
	I1026 15:08:43.773565 1041604 cni.go:84] Creating CNI manager for ""
	I1026 15:08:43.773581 1041604 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:08:43.773592 1041604 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 15:08:43.773602 1041604 start_flags.go:323] config:
	{Name:stopped-upgrade-886432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-886432 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1026 15:08:43.775063 1041604 out.go:177] * Starting control plane node stopped-upgrade-886432 in cluster stopped-upgrade-886432
	I1026 15:08:43.776307 1041604 cache.go:121] Beginning downloading kic base image for docker with crio
	I1026 15:08:43.777539 1041604 out.go:177] * Pulling base image ...
	I1026 15:08:43.779187 1041604 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1026 15:08:43.779232 1041604 preload.go:148] Found local preload: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1026 15:08:43.779253 1041604 cache.go:56] Caching tarball of preloaded images
	I1026 15:08:43.779275 1041604 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1026 15:08:43.779352 1041604 preload.go:174] Found /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 15:08:43.779361 1041604 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1026 15:08:43.779506 1041604 profile.go:148] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/stopped-upgrade-886432/config.json ...
	I1026 15:08:43.779525 1041604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/stopped-upgrade-886432/config.json: {Name:mk3c62821d5d8fb38e1f67db7c60cfa9ed80751e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:08:43.798289 1041604 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1026 15:08:43.798316 1041604 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1026 15:08:43.798346 1041604 cache.go:194] Successfully downloaded all kic artifacts
	I1026 15:08:43.798391 1041604 start.go:365] acquiring machines lock for stopped-upgrade-886432: {Name:mk37e3241add2102f98fec7d7f1a0b73d329b120 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:08:43.798506 1041604 start.go:369] acquired machines lock for "stopped-upgrade-886432" in 92.523µs
	I1026 15:08:43.798539 1041604 start.go:93] Provisioning new machine with config: &{Name:stopped-upgrade-886432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-886432 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:08:43.798647 1041604 start.go:125] createHost starting for "" (driver="docker")
	I1026 15:08:45.218349 1030092 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1026 15:08:45.218388 1030092 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	
	
	==> CRI-O <==
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.701541066Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.702371706Z" level=info msg="Conmon does support the --sync option"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.702395654Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.702408995Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.703139914Z" level=info msg="Conmon does support the --sync option"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.703158689Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.707467551Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.707500467Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.708021324Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.708476174Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.708533692Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.714203016Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.760863736Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-lhn84 Namespace:kube-system ID:e9717b72b07cac6c62cfee2bb58abf60a615035b408f369f8aa93dc96efbfb7d UID:fd7b2cf8-0d3e-48ee-8c60-a27936f4cb3c NetNS:/var/run/netns/9b07ab87-c9d6-4c61-bbea-c82e5fd5c978 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000990120}] Aliases:map[]}"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.761061908Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-lhn84 for CNI network kindnet (type=ptp)"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.761553006Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.761573822Z" level=info msg="Starting seccomp notifier watcher"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.761628073Z" level=info msg="Create NRI interface"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.761725385Z" level=info msg="built-in NRI default validator is disabled"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.761739161Z" level=info msg="runtime interface created"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.761752989Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.761759546Z" level=info msg="runtime interface starting up..."
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.761767176Z" level=info msg="starting plugins..."
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.761781644Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 26 15:08:38 pause-212674 crio[2137]: time="2025-10-26T15:08:38.762261564Z" level=info msg="No systemd watchdog enabled"
	Oct 26 15:08:38 pause-212674 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	76a92c2837df2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago       Running             coredns                   0                   e9717b72b07ca       coredns-66bc5c9577-lhn84               kube-system
	8eb50db59ee41       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   54 seconds ago       Running             kube-proxy                0                   bab37c6ab94e2       kube-proxy-99d97                       kube-system
	7e4f98966522c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   54 seconds ago       Running             kindnet-cni               0                   a2496238005ce       kindnet-bjh7x                          kube-system
	3dc0a89ae51a3       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   About a minute ago   Running             kube-apiserver            0                   58b8c3d4c17fa       kube-apiserver-pause-212674            kube-system
	a660a07e95a04       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   About a minute ago   Running             kube-controller-manager   0                   5c9ce9561ea8f       kube-controller-manager-pause-212674   kube-system
	acd73a72a0a4a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Running             etcd                      0                   a0ee9564f2c6c       etcd-pause-212674                      kube-system
	2ca34ae5c7a13       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Running             kube-scheduler            0                   51af68ddcdf7e       kube-scheduler-pause-212674            kube-system
	
	
	==> coredns [76a92c2837df2cc25dc74f22756d489ca379c485519fd06a0bfe92725896ccda] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55040 - 39294 "HINFO IN 7554385736374659516.1494010492529213197. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.471665612s
	
	
	==> describe nodes <==
	Name:               pause-212674
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-212674
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=pause-212674
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_07_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:07:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-212674
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:08:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:08:37 +0000   Sun, 26 Oct 2025 15:07:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:08:37 +0000   Sun, 26 Oct 2025 15:07:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:08:37 +0000   Sun, 26 Oct 2025 15:07:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:08:37 +0000   Sun, 26 Oct 2025 15:08:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-212674
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                d0a35729-da6a-4c5b-aefa-8cad991b17a2
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-lhn84                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     55s
	  kube-system                 etcd-pause-212674                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         60s
	  kube-system                 kindnet-bjh7x                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-pause-212674             250m (3%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-pause-212674    200m (2%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-99d97                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-pause-212674             100m (1%)     0 (0%)      0 (0%)           0 (0%)         60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 54s   kube-proxy       
	  Normal  Starting                 60s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s   kubelet          Node pause-212674 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s   kubelet          Node pause-212674 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s   kubelet          Node pause-212674 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s   node-controller  Node pause-212674 event: Registered Node pause-212674 in Controller
	  Normal  NodeReady                14s   kubelet          Node pause-212674 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	
	
	==> etcd [acd73a72a0a4ab35dee77374f67abf27b8ef2df3b345ba0daf443d855d262c41] <==
	{"level":"warn","ts":"2025-10-26T15:07:43.128893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.136915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.144804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.153567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.160643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.167512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.176377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.182933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.189571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.196550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.203157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.209743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.217925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.224601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.231766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.239403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.246881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.257348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.270466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.277380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.287280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:07:43.332748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59150","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T15:08:00.142440Z","caller":"traceutil/trace.go:172","msg":"trace[294195410] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"218.747579ms","start":"2025-10-26T15:07:59.923673Z","end":"2025-10-26T15:08:00.142420Z","steps":["trace[294195410] 'process raft request'  (duration: 218.54571ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:08:17.847274Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.620175ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-212674\" limit:1 ","response":"range_response_count:1 size:5582"}
	{"level":"info","ts":"2025-10-26T15:08:17.847382Z","caller":"traceutil/trace.go:172","msg":"trace[1109080119] range","detail":"{range_begin:/registry/minions/pause-212674; range_end:; response_count:1; response_revision:389; }","duration":"105.743495ms","start":"2025-10-26T15:08:17.741616Z","end":"2025-10-26T15:08:17.847359Z","steps":["trace[1109080119] 'range keys from in-memory index tree'  (duration: 105.541121ms)"],"step_count":1}
	
	
	==> kernel <==
	 15:08:46 up  2:51,  0 user,  load average: 3.91, 1.97, 1.31
	Linux pause-212674 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7e4f98966522c866a71e7bb28586342f5501fbd080d5cf5c1ab2482d0f8c18b4] <==
	I1026 15:07:52.141486       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:07:52.141868       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1026 15:07:52.142102       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:07:52.142131       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:07:52.142246       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:07:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:07:52.395137       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:07:52.395208       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:07:52.395220       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:07:52.395345       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 15:08:22.396978       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1026 15:08:22.397043       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 15:08:22.397058       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 15:08:22.396720       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1026 15:08:23.995485       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:08:23.995525       1 metrics.go:72] Registering metrics
	I1026 15:08:23.995637       1 controller.go:711] "Syncing nftables rules"
	I1026 15:08:32.402278       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 15:08:32.402314       1 main.go:301] handling current node
	I1026 15:08:42.398304       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 15:08:42.398386       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3dc0a89ae51a3f687143967862912464f2f52ea10b2157c46206722f6aa5fa35] <==
	I1026 15:07:43.862915       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 15:07:43.869601       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1026 15:07:43.869653       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:07:43.873951       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:07:43.874198       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:07:43.874309       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1026 15:07:43.878804       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 15:07:44.041941       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:07:44.750072       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 15:07:44.754073       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 15:07:44.754093       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:07:45.502864       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:07:45.551287       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:07:45.659576       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 15:07:45.671279       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1026 15:07:45.672740       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:07:45.678427       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:07:45.792111       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:07:46.735350       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:07:46.748692       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 15:07:46.764574       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 15:07:51.095634       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 15:07:51.543940       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1026 15:07:51.792228       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:07:51.796474       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [a660a07e95a04255795b415a311bea966b4e1dc2146f0a58983099689e76a894] <==
	I1026 15:07:50.788830       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 15:07:50.788830       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 15:07:50.788858       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 15:07:50.789264       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 15:07:50.790238       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 15:07:50.790253       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 15:07:50.791882       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 15:07:50.794188       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 15:07:50.794267       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 15:07:50.796505       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 15:07:50.796523       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 15:07:50.796570       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 15:07:50.796608       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 15:07:50.796612       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 15:07:50.796617       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 15:07:50.801902       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 15:07:50.804404       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-212674" podCIDRs=["10.244.0.0/24"]
	I1026 15:07:50.807059       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:07:50.812256       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 15:07:50.829669       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:07:50.834013       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:07:50.837403       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:07:50.837428       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:07:50.837443       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 15:08:35.742509       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8eb50db59ee41b8577135e83e629f9d3fb42e5c56de444ab31866918de5c351c] <==
	I1026 15:07:52.038058       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:07:52.099708       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:07:52.200274       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:07:52.200328       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1026 15:07:52.200426       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:07:52.221388       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:07:52.221461       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:07:52.227674       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:07:52.228119       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:07:52.228157       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:07:52.229687       1 config.go:200] "Starting service config controller"
	I1026 15:07:52.229712       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:07:52.229777       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:07:52.229808       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:07:52.230463       1 config.go:309] "Starting node config controller"
	I1026 15:07:52.230576       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:07:52.230589       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:07:52.230921       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:07:52.230949       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:07:52.329885       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:07:52.331099       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 15:07:52.331111       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2ca34ae5c7a132c932b0fe42c366c6c93ec5457e9f726c9b563a57bc22191508] <==
	E1026 15:07:43.803995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 15:07:43.804133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 15:07:43.804279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 15:07:43.804222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 15:07:43.804396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 15:07:43.804471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 15:07:43.804494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 15:07:43.804637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 15:07:44.634965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 15:07:44.682556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 15:07:44.684590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 15:07:44.725411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 15:07:44.736975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 15:07:44.792225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 15:07:44.805149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 15:07:44.828776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 15:07:44.835401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 15:07:44.845058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 15:07:44.896331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 15:07:44.974735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 15:07:45.045348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1026 15:07:45.212428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 15:07:45.217985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 15:07:45.221698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1026 15:07:47.404263       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:07:51 pause-212674 kubelet[1293]: I1026 15:07:51.644419    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvhx9\" (UniqueName: \"kubernetes.io/projected/7f41d1e4-dda2-49ea-8eab-5c7f7c6eb03e-kube-api-access-xvhx9\") pod \"kindnet-bjh7x\" (UID: \"7f41d1e4-dda2-49ea-8eab-5c7f7c6eb03e\") " pod="kube-system/kindnet-bjh7x"
	Oct 26 15:07:51 pause-212674 kubelet[1293]: I1026 15:07:51.644436    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ee5c15db-0142-42d7-bf85-d700a411e536-kube-proxy\") pod \"kube-proxy-99d97\" (UID: \"ee5c15db-0142-42d7-bf85-d700a411e536\") " pod="kube-system/kube-proxy-99d97"
	Oct 26 15:07:51 pause-212674 kubelet[1293]: I1026 15:07:51.644449    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee5c15db-0142-42d7-bf85-d700a411e536-lib-modules\") pod \"kube-proxy-99d97\" (UID: \"ee5c15db-0142-42d7-bf85-d700a411e536\") " pod="kube-system/kube-proxy-99d97"
	Oct 26 15:07:51 pause-212674 kubelet[1293]: I1026 15:07:51.644463    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f41d1e4-dda2-49ea-8eab-5c7f7c6eb03e-xtables-lock\") pod \"kindnet-bjh7x\" (UID: \"7f41d1e4-dda2-49ea-8eab-5c7f7c6eb03e\") " pod="kube-system/kindnet-bjh7x"
	Oct 26 15:07:52 pause-212674 kubelet[1293]: I1026 15:07:52.731148    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-99d97" podStartSLOduration=1.7311305319999999 podStartE2EDuration="1.731130532s" podCreationTimestamp="2025-10-26 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:07:52.730980647 +0000 UTC m=+6.234321533" watchObservedRunningTime="2025-10-26 15:07:52.731130532 +0000 UTC m=+6.234471416"
	Oct 26 15:07:52 pause-212674 kubelet[1293]: I1026 15:07:52.742127    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-bjh7x" podStartSLOduration=1.742105955 podStartE2EDuration="1.742105955s" podCreationTimestamp="2025-10-26 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:07:52.741888344 +0000 UTC m=+6.245229243" watchObservedRunningTime="2025-10-26 15:07:52.742105955 +0000 UTC m=+6.245446840"
	Oct 26 15:08:32 pause-212674 kubelet[1293]: I1026 15:08:32.733743    1293 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 26 15:08:32 pause-212674 kubelet[1293]: I1026 15:08:32.837424    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhsg4\" (UniqueName: \"kubernetes.io/projected/fd7b2cf8-0d3e-48ee-8c60-a27936f4cb3c-kube-api-access-qhsg4\") pod \"coredns-66bc5c9577-lhn84\" (UID: \"fd7b2cf8-0d3e-48ee-8c60-a27936f4cb3c\") " pod="kube-system/coredns-66bc5c9577-lhn84"
	Oct 26 15:08:32 pause-212674 kubelet[1293]: I1026 15:08:32.837517    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd7b2cf8-0d3e-48ee-8c60-a27936f4cb3c-config-volume\") pod \"coredns-66bc5c9577-lhn84\" (UID: \"fd7b2cf8-0d3e-48ee-8c60-a27936f4cb3c\") " pod="kube-system/coredns-66bc5c9577-lhn84"
	Oct 26 15:08:33 pause-212674 kubelet[1293]: I1026 15:08:33.821216    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lhn84" podStartSLOduration=42.821193761 podStartE2EDuration="42.821193761s" podCreationTimestamp="2025-10-26 15:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:08:33.820898412 +0000 UTC m=+47.324239334" watchObservedRunningTime="2025-10-26 15:08:33.821193761 +0000 UTC m=+47.324534649"
	Oct 26 15:08:36 pause-212674 kubelet[1293]: W1026 15:08:36.816507    1293 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 26 15:08:36 pause-212674 kubelet[1293]: E1026 15:08:36.816646    1293 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 26 15:08:36 pause-212674 kubelet[1293]: E1026 15:08:36.816696    1293 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 15:08:36 pause-212674 kubelet[1293]: E1026 15:08:36.816710    1293 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 15:08:36 pause-212674 kubelet[1293]: W1026 15:08:36.917266    1293 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 26 15:08:38 pause-212674 kubelet[1293]: W1026 15:08:38.668949    1293 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 26 15:08:38 pause-212674 kubelet[1293]: E1026 15:08:38.669036    1293 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Oct 26 15:08:38 pause-212674 kubelet[1293]: E1026 15:08:38.669124    1293 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 15:08:38 pause-212674 kubelet[1293]: E1026 15:08:38.669145    1293 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 15:08:38 pause-212674 kubelet[1293]: E1026 15:08:38.669192    1293 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 15:08:42 pause-212674 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:08:42 pause-212674 kubelet[1293]: I1026 15:08:42.468516    1293 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 26 15:08:42 pause-212674 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:08:42 pause-212674 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 26 15:08:42 pause-212674 systemd[1]: kubelet.service: Consumed 2.362s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-212674 -n pause-212674
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-212674 -n pause-212674: exit status 2 (344.366886ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-212674 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.38s)
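The kubelet entries above end with repeated `dial unix /var/run/crio/crio.sock: connect: no such file or directory` errors once the pause begins, which matches the `status` command reporting the API server as Running while the runtime is already down. A minimal triage sketch (a hypothetical helper, not minikube code) that dials the same socket from the node, assuming the path shown in the log:

	// Hypothetical triage helper: dial the CRI socket the kubelet log
	// reports as missing after the pause, to confirm whether crio is
	// still serving connections.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/crio/crio.sock" // path taken from the kubelet errors above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "CRI socket unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("CRI socket is accepting connections")
	}

Run on the node (for example via `minikube ssh`), a dial failure here reproduces the GenericPLEG "Unable to retrieve pods" errors seen above.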

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-330914 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-330914 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (265.84647ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:11:19Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-330914 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-330914 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-330914 describe deploy/metrics-server -n kube-system: exit status 1 (60.901896ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-330914 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
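The paused-state check in the stderr above shells out to `sudo runc list -f json` and fails because `/run/runc` is absent; `/run/runc` is runc's default state root when running as root, and a CRI-O node may keep runtime state under a different root. A minimal sketch (a hypothetical diagnostic, not minikube code) that reproduces the same probe:

	// Hypothetical diagnostic: re-run the paused-state probe from the
	// stderr above and report why it fails on this node.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// The default runc state root for root is /run/runc; the log above
		// shows it missing, which is why `runc list` exits non-zero here.
		if _, err := os.Stat("/run/runc"); os.IsNotExist(err) {
			fmt.Println("state root /run/runc is missing; expect `runc list` to fail")
		}
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "runc list failed: %v\n%s", err, out)
			os.Exit(1)
		}
		fmt.Printf("runc list output: %s\n", out)
	}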
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-330914
helpers_test.go:243: (dbg) docker inspect old-k8s-version-330914:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "91ae2e5aad345c2e0703f327fd036502476cd376cb2a6c583db438ed9b0ac0fe",
	        "Created": "2025-10-26T15:10:26.438664017Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1074517,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:10:26.700852188Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/91ae2e5aad345c2e0703f327fd036502476cd376cb2a6c583db438ed9b0ac0fe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91ae2e5aad345c2e0703f327fd036502476cd376cb2a6c583db438ed9b0ac0fe/hostname",
	        "HostsPath": "/var/lib/docker/containers/91ae2e5aad345c2e0703f327fd036502476cd376cb2a6c583db438ed9b0ac0fe/hosts",
	        "LogPath": "/var/lib/docker/containers/91ae2e5aad345c2e0703f327fd036502476cd376cb2a6c583db438ed9b0ac0fe/91ae2e5aad345c2e0703f327fd036502476cd376cb2a6c583db438ed9b0ac0fe-json.log",
	        "Name": "/old-k8s-version-330914",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-330914:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-330914",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "91ae2e5aad345c2e0703f327fd036502476cd376cb2a6c583db438ed9b0ac0fe",
	                "LowerDir": "/var/lib/docker/overlay2/1ed9fa6c3b37e53a285735adb39a4961c8ca3dc94f31480b0cfd0d1b96fc7a86-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1ed9fa6c3b37e53a285735adb39a4961c8ca3dc94f31480b0cfd0d1b96fc7a86/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1ed9fa6c3b37e53a285735adb39a4961c8ca3dc94f31480b0cfd0d1b96fc7a86/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1ed9fa6c3b37e53a285735adb39a4961c8ca3dc94f31480b0cfd0d1b96fc7a86/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-330914",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-330914/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-330914",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-330914",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-330914",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "87c29bcec7b7f2729ea383b1b1e8e6417a0e9276f4de110982c186362109c03a",
	            "SandboxKey": "/var/run/docker/netns/87c29bcec7b7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33822"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33823"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33826"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33824"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33825"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-330914": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:56:eb:20:4d:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "56ce3fb526f5012c2231b9293c9ece449bc551903b4972b11997763e4592ce3f",
	                    "EndpointID": "45c7609de7168957acb18bd0929b1ede540b0a832885f848323edb62ea312c91",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-330914",
	                        "91ae2e5aad34"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330914 -n old-k8s-version-330914
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-330914 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-330914 logs -n 25: (1.124833021s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-498531 sudo containerd config dump                                                                                                                                                                                                  │ cilium-498531             │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │                     │
	│ ssh     │ -p cilium-498531 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-498531             │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │                     │
	│ ssh     │ -p cilium-498531 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-498531             │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │                     │
	│ ssh     │ -p cilium-498531 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-498531             │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │                     │
	│ ssh     │ -p cilium-498531 sudo crio config                                                                                                                                                                                                             │ cilium-498531             │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │                     │
	│ delete  │ -p cilium-498531                                                                                                                                                                                                                              │ cilium-498531             │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │ 26 Oct 25 15:09 UTC │
	│ start   │ -p cert-expiration-619245 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-619245    │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │ 26 Oct 25 15:09 UTC │
	│ start   │ -p NoKubernetes-917490 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │ 26 Oct 25 15:09 UTC │
	│ start   │ -p force-systemd-flag-391593 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-391593 │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │ 26 Oct 25 15:09 UTC │
	│ delete  │ -p NoKubernetes-917490                                                                                                                                                                                                                        │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │ 26 Oct 25 15:09 UTC │
	│ start   │ -p NoKubernetes-917490 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │ 26 Oct 25 15:09 UTC │
	│ ssh     │ -p NoKubernetes-917490 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │                     │
	│ ssh     │ force-systemd-flag-391593 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-391593 │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │ 26 Oct 25 15:09 UTC │
	│ delete  │ -p force-systemd-flag-391593                                                                                                                                                                                                                  │ force-systemd-flag-391593 │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │ 26 Oct 25 15:10 UTC │
	│ start   │ -p cert-options-124833 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-124833       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ stop    │ -p NoKubernetes-917490                                                                                                                                                                                                                        │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ start   │ -p NoKubernetes-917490 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ ssh     │ -p NoKubernetes-917490 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ delete  │ -p NoKubernetes-917490                                                                                                                                                                                                                        │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ start   │ -p old-k8s-version-330914 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-330914    │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:11 UTC │
	│ ssh     │ cert-options-124833 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-124833       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ ssh     │ -p cert-options-124833 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-124833       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ delete  │ -p cert-options-124833                                                                                                                                                                                                                        │ cert-options-124833       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ start   │ -p no-preload-475081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-475081         │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:11 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-330914 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-330914    │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:10:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:10:27.003560 1074625 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:10:27.003818 1074625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:10:27.003825 1074625 out.go:374] Setting ErrFile to fd 2...
	I1026 15:10:27.003829 1074625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:10:27.004048 1074625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:10:27.004549 1074625 out.go:368] Setting JSON to false
	I1026 15:10:27.005877 1074625 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10375,"bootTime":1761481052,"procs":364,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:10:27.005989 1074625 start.go:141] virtualization: kvm guest
	I1026 15:10:27.008185 1074625 out.go:179] * [no-preload-475081] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:10:27.010340 1074625 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:10:27.010376 1074625 notify.go:220] Checking for updates...
	I1026 15:10:27.014512 1074625 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:10:27.015757 1074625 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:10:27.017721 1074625 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 15:10:27.019483 1074625 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:10:27.020699 1074625 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:10:27.022421 1074625 config.go:182] Loaded profile config "cert-expiration-619245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:10:27.022573 1074625 config.go:182] Loaded profile config "kubernetes-upgrade-176599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:10:27.022728 1074625 config.go:182] Loaded profile config "old-k8s-version-330914": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 15:10:27.022869 1074625 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:10:27.050966 1074625 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 15:10:27.051052 1074625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:10:27.125497 1074625 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 15:10:27.112696147 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:10:27.125606 1074625 docker.go:318] overlay module found
	I1026 15:10:27.128060 1074625 out.go:179] * Using the docker driver based on user configuration
	I1026 15:10:27.129256 1074625 start.go:305] selected driver: docker
	I1026 15:10:27.129276 1074625 start.go:925] validating driver "docker" against <nil>
	I1026 15:10:27.129293 1074625 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:10:27.130033 1074625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:10:27.214730 1074625 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 15:10:27.202343666 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:10:27.214951 1074625 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:10:27.215216 1074625 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:10:27.217420 1074625 out.go:179] * Using Docker driver with root privileges
	I1026 15:10:27.218781 1074625 cni.go:84] Creating CNI manager for ""
	I1026 15:10:27.218883 1074625 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:10:27.218896 1074625 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 15:10:27.219022 1074625 start.go:349] cluster config:
	{Name:no-preload-475081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-475081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:10:27.220680 1074625 out.go:179] * Starting "no-preload-475081" primary control-plane node in "no-preload-475081" cluster
	I1026 15:10:27.222124 1074625 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:10:27.223550 1074625 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:10:27.224785 1074625 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:10:27.224907 1074625 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:10:27.224980 1074625 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/config.json ...
	I1026 15:10:27.225021 1074625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/config.json: {Name:mk4b3cf580b49d6ad576694b31a852b8c72157a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:27.225130 1074625 cache.go:107] acquiring lock: {Name:mk937f429b3d3636ff8775b90e16c023489c7adf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:10:27.225127 1074625 cache.go:107] acquiring lock: {Name:mk542564d39af87b00a1863120bb08cf008fe7c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:10:27.225252 1074625 cache.go:107] acquiring lock: {Name:mk1536b2f10db5b203b98b8484729c964c7ca6e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:10:27.225269 1074625 cache.go:115] /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1026 15:10:27.225279 1074625 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 173.267µs
	I1026 15:10:27.225290 1074625 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1026 15:10:27.225290 1074625 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:10:27.225340 1074625 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:10:27.225331 1074625 cache.go:107] acquiring lock: {Name:mkc179cf4029d6736ce61dbfad39b348fc2c96b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:10:27.225304 1074625 cache.go:107] acquiring lock: {Name:mk59c1a44c70bc7e7856311c44a1559489b29c53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:10:27.225414 1074625 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1026 15:10:27.225466 1074625 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1026 15:10:27.225592 1074625 cache.go:107] acquiring lock: {Name:mk6b4452625dc58192fa1eb2696a2e362bd1db25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:10:27.225616 1074625 cache.go:107] acquiring lock: {Name:mk4c631399a8aca700734c5e2f0c2f2d3de52916 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:10:27.225677 1074625 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:10:27.225691 1074625 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:10:27.225592 1074625 cache.go:107] acquiring lock: {Name:mkf66b984302bba364c4bdc743639502359ea174 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:10:27.225954 1074625 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:10:27.227598 1074625 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:10:27.227915 1074625 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:10:27.228568 1074625 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:10:27.229213 1074625 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:10:27.229505 1074625 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1026 15:10:27.230336 1074625 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1026 15:10:27.230665 1074625 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:10:27.265294 1074625 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:10:27.265319 1074625 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:10:27.265335 1074625 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:10:27.265367 1074625 start.go:360] acquireMachinesLock for no-preload-475081: {Name:mk9c0a34e6930824c553b7de78574fec03de3709 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:10:27.265470 1074625 start.go:364] duration metric: took 84.128µs to acquireMachinesLock for "no-preload-475081"
	I1026 15:10:27.265501 1074625 start.go:93] Provisioning new machine with config: &{Name:no-preload-475081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-475081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:10:27.265579 1074625 start.go:125] createHost starting for "" (driver="docker")
	I1026 15:10:26.360896 1072816 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-330914:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.377121956s)
	I1026 15:10:26.360941 1072816 kic.go:203] duration metric: took 5.377327615s to extract preloaded images to volume ...
	W1026 15:10:26.361069 1072816 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 15:10:26.361100 1072816 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 15:10:26.361199 1072816 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 15:10:26.422361 1072816 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-330914 --name old-k8s-version-330914 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-330914 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-330914 --network old-k8s-version-330914 --ip 192.168.85.2 --volume old-k8s-version-330914:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 15:10:26.993923 1072816 cli_runner.go:164] Run: docker container inspect old-k8s-version-330914 --format={{.State.Running}}
	I1026 15:10:27.014269 1072816 cli_runner.go:164] Run: docker container inspect old-k8s-version-330914 --format={{.State.Status}}
	I1026 15:10:27.038324 1072816 cli_runner.go:164] Run: docker exec old-k8s-version-330914 stat /var/lib/dpkg/alternatives/iptables
	I1026 15:10:27.101250 1072816 oci.go:144] the created container "old-k8s-version-330914" has a running status.
	I1026 15:10:27.101302 1072816 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/old-k8s-version-330914/id_rsa...
	I1026 15:10:27.526543 1072816 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-841519/.minikube/machines/old-k8s-version-330914/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 15:10:27.558794 1072816 cli_runner.go:164] Run: docker container inspect old-k8s-version-330914 --format={{.State.Status}}
	I1026 15:10:27.583723 1072816 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 15:10:27.583747 1072816 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-330914 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 15:10:27.645295 1072816 cli_runner.go:164] Run: docker container inspect old-k8s-version-330914 --format={{.State.Status}}
	I1026 15:10:27.670009 1072816 machine.go:93] provisionDockerMachine start ...
	I1026 15:10:27.670125 1072816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:10:27.699840 1072816 main.go:141] libmachine: Using SSH client type: native
	I1026 15:10:27.700217 1072816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1026 15:10:27.700235 1072816 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:10:27.865463 1072816 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-330914
	
	I1026 15:10:27.865494 1072816 ubuntu.go:182] provisioning hostname "old-k8s-version-330914"
	I1026 15:10:27.865574 1072816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:10:27.891354 1072816 main.go:141] libmachine: Using SSH client type: native
	I1026 15:10:27.891728 1072816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1026 15:10:27.891780 1072816 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-330914 && echo "old-k8s-version-330914" | sudo tee /etc/hostname
	I1026 15:10:28.066183 1072816 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-330914
	
	I1026 15:10:28.066264 1072816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:10:28.087012 1072816 main.go:141] libmachine: Using SSH client type: native
	I1026 15:10:28.087357 1072816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1026 15:10:28.087388 1072816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-330914' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-330914/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-330914' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:10:28.240673 1072816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:10:28.240705 1072816 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-841519/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-841519/.minikube}
	I1026 15:10:28.240729 1072816 ubuntu.go:190] setting up certificates
	I1026 15:10:28.240742 1072816 provision.go:84] configureAuth start
	I1026 15:10:28.240822 1072816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-330914
	I1026 15:10:28.261303 1072816 provision.go:143] copyHostCerts
	I1026 15:10:28.261390 1072816 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem, removing ...
	I1026 15:10:28.261408 1072816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem
	I1026 15:10:28.261502 1072816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem (1082 bytes)
	I1026 15:10:28.261641 1072816 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem, removing ...
	I1026 15:10:28.261654 1072816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem
	I1026 15:10:28.261696 1072816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem (1123 bytes)
	I1026 15:10:28.261802 1072816 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem, removing ...
	I1026 15:10:28.261822 1072816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem
	I1026 15:10:28.261854 1072816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem (1675 bytes)
	I1026 15:10:28.261927 1072816 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-330914 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-330914]
	I1026 15:10:29.304631 1072816 provision.go:177] copyRemoteCerts
	I1026 15:10:29.304693 1072816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:10:29.304733 1072816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:10:29.323888 1072816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/old-k8s-version-330914/id_rsa Username:docker}
	I1026 15:10:29.426493 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:10:29.447261 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1026 15:10:29.466016 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:10:29.484108 1072816 provision.go:87] duration metric: took 1.243350184s to configureAuth
	I1026 15:10:29.484143 1072816 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:10:29.484345 1072816 config.go:182] Loaded profile config "old-k8s-version-330914": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 15:10:29.484441 1072816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:10:29.502386 1072816 main.go:141] libmachine: Using SSH client type: native
	I1026 15:10:29.502618 1072816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1026 15:10:29.502635 1072816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:10:29.771907 1072816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:10:29.771947 1072816 machine.go:96] duration metric: took 2.101915141s to provisionDockerMachine
	I1026 15:10:29.771960 1072816 client.go:171] duration metric: took 9.386031718s to LocalClient.Create
	I1026 15:10:29.771985 1072816 start.go:167] duration metric: took 9.38611743s to libmachine.API.Create "old-k8s-version-330914"
	I1026 15:10:29.771995 1072816 start.go:293] postStartSetup for "old-k8s-version-330914" (driver="docker")
	I1026 15:10:29.772014 1072816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:10:29.772082 1072816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:10:29.772136 1072816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:10:29.793479 1072816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/old-k8s-version-330914/id_rsa Username:docker}
	I1026 15:10:29.897024 1072816 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:10:29.900786 1072816 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:10:29.900822 1072816 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:10:29.900835 1072816 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/addons for local assets ...
	I1026 15:10:29.900896 1072816 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/files for local assets ...
	I1026 15:10:29.901002 1072816 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem -> 8450952.pem in /etc/ssl/certs
	I1026 15:10:29.901123 1072816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:10:29.909506 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:10:29.930476 1072816 start.go:296] duration metric: took 158.461324ms for postStartSetup
	I1026 15:10:29.930859 1072816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-330914
	I1026 15:10:29.949760 1072816 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/config.json ...
	I1026 15:10:29.950092 1072816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:10:29.950153 1072816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:10:29.968661 1072816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/old-k8s-version-330914/id_rsa Username:docker}
	I1026 15:10:30.067271 1072816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:10:30.072081 1072816 start.go:128] duration metric: took 9.689671202s to createHost
	I1026 15:10:30.072108 1072816 start.go:83] releasing machines lock for "old-k8s-version-330914", held for 9.689845414s
	I1026 15:10:30.072193 1072816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-330914
	I1026 15:10:30.090512 1072816 ssh_runner.go:195] Run: cat /version.json
	I1026 15:10:30.090559 1072816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:10:30.090592 1072816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:10:30.090680 1072816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:10:30.112513 1072816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/old-k8s-version-330914/id_rsa Username:docker}
	I1026 15:10:30.112682 1072816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/old-k8s-version-330914/id_rsa Username:docker}
	I1026 15:10:27.048230 1030092 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:10:27.048667 1030092 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1026 15:10:27.048729 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:10:27.048799 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:10:27.087066 1030092 cri.go:89] found id: "a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:27.087109 1030092 cri.go:89] found id: ""
	I1026 15:10:27.087118 1030092 logs.go:282] 1 containers: [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c]
	I1026 15:10:27.087221 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:27.093044 1030092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:10:27.093117 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:10:27.130720 1030092 cri.go:89] found id: ""
	I1026 15:10:27.130749 1030092 logs.go:282] 0 containers: []
	W1026 15:10:27.130758 1030092 logs.go:284] No container was found matching "etcd"
	I1026 15:10:27.130767 1030092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:10:27.130821 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:10:27.178192 1030092 cri.go:89] found id: ""
	I1026 15:10:27.178223 1030092 logs.go:282] 0 containers: []
	W1026 15:10:27.178236 1030092 logs.go:284] No container was found matching "coredns"
	I1026 15:10:27.178259 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:10:27.178320 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:10:27.214200 1030092 cri.go:89] found id: "933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:27.214224 1030092 cri.go:89] found id: ""
	I1026 15:10:27.214234 1030092 logs.go:282] 1 containers: [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e]
	I1026 15:10:27.214294 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:27.218845 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:10:27.218925 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:10:27.264658 1030092 cri.go:89] found id: ""
	I1026 15:10:27.264774 1030092 logs.go:282] 0 containers: []
	W1026 15:10:27.264817 1030092 logs.go:284] No container was found matching "kube-proxy"
	I1026 15:10:27.264839 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:10:27.264932 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:10:27.302943 1030092 cri.go:89] found id: "fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:27.302970 1030092 cri.go:89] found id: ""
	I1026 15:10:27.302981 1030092 logs.go:282] 1 containers: [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1]
	I1026 15:10:27.303047 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:27.308381 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:10:27.308459 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:10:27.348596 1030092 cri.go:89] found id: ""
	I1026 15:10:27.348628 1030092 logs.go:282] 0 containers: []
	W1026 15:10:27.348640 1030092 logs.go:284] No container was found matching "kindnet"
	I1026 15:10:27.348648 1030092 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 15:10:27.348714 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 15:10:27.390272 1030092 cri.go:89] found id: ""
	I1026 15:10:27.390309 1030092 logs.go:282] 0 containers: []
	W1026 15:10:27.390322 1030092 logs.go:284] No container was found matching "storage-provisioner"
	I1026 15:10:27.390336 1030092 logs.go:123] Gathering logs for kubelet ...
	I1026 15:10:27.390353 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:10:27.516762 1030092 logs.go:123] Gathering logs for dmesg ...
	I1026 15:10:27.516810 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:10:27.539323 1030092 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:10:27.539397 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:10:27.624275 1030092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 15:10:27.624301 1030092 logs.go:123] Gathering logs for kube-apiserver [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c] ...
	I1026 15:10:27.624332 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:27.669436 1030092 logs.go:123] Gathering logs for kube-scheduler [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e] ...
	I1026 15:10:27.669486 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:27.746321 1030092 logs.go:123] Gathering logs for kube-controller-manager [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1] ...
	I1026 15:10:27.746374 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:27.786108 1030092 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:10:27.786147 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:10:27.852417 1030092 logs.go:123] Gathering logs for container status ...
	I1026 15:10:27.852454 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 15:10:30.399246 1030092 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:10:30.399728 1030092 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1026 15:10:30.399804 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:10:30.399866 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:10:30.433274 1030092 cri.go:89] found id: "a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:30.433299 1030092 cri.go:89] found id: ""
	I1026 15:10:30.433309 1030092 logs.go:282] 1 containers: [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c]
	I1026 15:10:30.433371 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:30.437616 1030092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:10:30.437692 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:10:30.474672 1030092 cri.go:89] found id: ""
	I1026 15:10:30.474702 1030092 logs.go:282] 0 containers: []
	W1026 15:10:30.474714 1030092 logs.go:284] No container was found matching "etcd"
	I1026 15:10:30.474722 1030092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:10:30.474785 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:10:30.504326 1030092 cri.go:89] found id: ""
	I1026 15:10:30.504355 1030092 logs.go:282] 0 containers: []
	W1026 15:10:30.504365 1030092 logs.go:284] No container was found matching "coredns"
	I1026 15:10:30.504372 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:10:30.504431 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:10:30.533893 1030092 cri.go:89] found id: "933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:30.533915 1030092 cri.go:89] found id: ""
	I1026 15:10:30.533925 1030092 logs.go:282] 1 containers: [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e]
	I1026 15:10:30.533990 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:30.538178 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:10:30.538245 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:10:30.274422 1072816 ssh_runner.go:195] Run: systemctl --version
	I1026 15:10:30.281613 1072816 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:10:30.317620 1072816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:10:30.322634 1072816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:10:30.322707 1072816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:10:30.349963 1072816 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 15:10:30.349989 1072816 start.go:495] detecting cgroup driver to use...
	I1026 15:10:30.350027 1072816 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 15:10:30.350082 1072816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:10:30.368616 1072816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:10:30.381569 1072816 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:10:30.381641 1072816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:10:30.400357 1072816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:10:30.424175 1072816 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:10:30.519660 1072816 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:10:30.635087 1072816 docker.go:234] disabling docker service ...
	I1026 15:10:30.635244 1072816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:10:30.657718 1072816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:10:30.672474 1072816 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:10:30.773152 1072816 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:10:30.876905 1072816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:10:30.890212 1072816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:10:30.906513 1072816 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1026 15:10:30.906597 1072816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:30.917933 1072816 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 15:10:30.918007 1072816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:30.928494 1072816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:30.939454 1072816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:30.949573 1072816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:10:30.963304 1072816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:30.974847 1072816 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:30.989632 1072816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:31.001245 1072816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:10:31.010372 1072816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:10:31.018448 1072816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:10:31.115050 1072816 ssh_runner.go:195] Run: sudo systemctl restart crio
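
The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted. A quick way to verify the result; the expected values in the comments follow directly from the commands logged above:

  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
  # Expected after the edits above:
  #   pause_image = "registry.k8s.io/pause:3.9"
  #   cgroup_manager = "systemd"
  #   conmon_cgroup = "pod"
  #   default_sysctls = [
  #     "net.ipv4.ip_unprivileged_port_start=0",
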
	I1026 15:10:31.234496 1072816 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:10:31.234569 1072816 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:10:31.239037 1072816 start.go:563] Will wait 60s for crictl version
	I1026 15:10:31.239106 1072816 ssh_runner.go:195] Run: which crictl
	I1026 15:10:31.242887 1072816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:10:31.271736 1072816 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:10:31.271827 1072816 ssh_runner.go:195] Run: crio --version
	I1026 15:10:31.301698 1072816 ssh_runner.go:195] Run: crio --version
	I1026 15:10:31.339075 1072816 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1026 15:10:27.275444 1074625 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 15:10:27.275935 1074625 start.go:159] libmachine.API.Create for "no-preload-475081" (driver="docker")
	I1026 15:10:27.275971 1074625 client.go:168] LocalClient.Create starting
	I1026 15:10:27.276058 1074625 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem
	I1026 15:10:27.276109 1074625 main.go:141] libmachine: Decoding PEM data...
	I1026 15:10:27.276128 1074625 main.go:141] libmachine: Parsing certificate...
	I1026 15:10:27.276437 1074625 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem
	I1026 15:10:27.276502 1074625 main.go:141] libmachine: Decoding PEM data...
	I1026 15:10:27.276525 1074625 main.go:141] libmachine: Parsing certificate...
	I1026 15:10:27.277147 1074625 cli_runner.go:164] Run: docker network inspect no-preload-475081 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 15:10:27.299628 1074625 cli_runner.go:211] docker network inspect no-preload-475081 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 15:10:27.299706 1074625 network_create.go:284] running [docker network inspect no-preload-475081] to gather additional debugging logs...
	I1026 15:10:27.299726 1074625 cli_runner.go:164] Run: docker network inspect no-preload-475081
	W1026 15:10:27.322313 1074625 cli_runner.go:211] docker network inspect no-preload-475081 returned with exit code 1
	I1026 15:10:27.322351 1074625 network_create.go:287] error running [docker network inspect no-preload-475081]: docker network inspect no-preload-475081: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-475081 not found
	I1026 15:10:27.322367 1074625 network_create.go:289] output of [docker network inspect no-preload-475081]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-475081 not found
	
	** /stderr **
	I1026 15:10:27.322497 1074625 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:10:27.356838 1074625 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fa58be42f477 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:e4:ad:45:54:67} reservation:<nil>}
	I1026 15:10:27.357897 1074625 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-788b1aa150f9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:3d:9b:f7:9b:2d} reservation:<nil>}
	I1026 15:10:27.358843 1074625 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3ea0f8afe5af IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:d6:81:f4:17:77:eb} reservation:<nil>}
	I1026 15:10:27.360770 1074625 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d6289da05fd0 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ba:46:02:30:47:06} reservation:<nil>}
	I1026 15:10:27.361640 1074625 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-56ce3fb526f5 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:2e:3a:ce:5d:57:e6} reservation:<nil>}
	I1026 15:10:27.362139 1074625 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-d4e229d938e3 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:7e:a1:7f:77:f5:d3} reservation:<nil>}
	I1026 15:10:27.363023 1074625 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e03b50}
	I1026 15:10:27.363058 1074625 network_create.go:124] attempt to create docker network no-preload-475081 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1026 15:10:27.363129 1074625 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-475081 no-preload-475081
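
The subnet probe above walks 192.168.49.0/24, 192.168.58.0/24, ... in steps of 9 and takes the first /24 that no existing network claims (192.168.103.0/24 here). A minimal bash sketch of the same scan, assuming only docker-managed bridge networks can collide, which matches the br-* interfaces recorded in the skipped entries above:

  # Subnets every existing docker network already claims, one per line.
  used=$(docker network ls -q | xargs -r docker network inspect \
           --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}')
  # Same candidate sequence as the log: 192.168.49.0/24, then +9 per step.
  for octet in $(seq 49 9 254); do
    candidate="192.168.${octet}.0/24"
    if ! grep -qx "$candidate" <<<"$used"; then
      echo "first free subnet: $candidate"
      break
    fi
  done
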
	I1026 15:10:27.393001 1074625 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1026 15:10:27.393909 1074625 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1026 15:10:27.399998 1074625 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1026 15:10:27.403806 1074625 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1026 15:10:27.420497 1074625 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1026 15:10:27.436085 1074625 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1026 15:10:27.453460 1074625 network_create.go:108] docker network no-preload-475081 192.168.103.0/24 created
	I1026 15:10:27.453495 1074625 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-475081" container
	I1026 15:10:27.453693 1074625 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 15:10:27.467498 1074625 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1026 15:10:27.477472 1074625 cli_runner.go:164] Run: docker volume create no-preload-475081 --label name.minikube.sigs.k8s.io=no-preload-475081 --label created_by.minikube.sigs.k8s.io=true
	I1026 15:10:27.500283 1074625 oci.go:103] Successfully created a docker volume no-preload-475081
	I1026 15:10:27.500373 1074625 cli_runner.go:164] Run: docker run --rm --name no-preload-475081-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-475081 --entrypoint /usr/bin/test -v no-preload-475081:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 15:10:27.512623 1074625 cache.go:157] /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1026 15:10:27.512655 1074625 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 287.350752ms
	I1026 15:10:27.512672 1074625 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1026 15:10:27.783597 1074625 cache.go:157] /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1026 15:10:27.783629 1074625 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 558.379389ms
	I1026 15:10:27.783646 1074625 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1026 15:10:27.988793 1074625 oci.go:107] Successfully prepared a docker volume no-preload-475081
	I1026 15:10:27.988828 1074625 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1026 15:10:27.988938 1074625 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 15:10:27.989006 1074625 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 15:10:27.989068 1074625 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 15:10:28.048969 1074625 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-475081 --name no-preload-475081 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-475081 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-475081 --network no-preload-475081 --ip 192.168.103.2 --volume no-preload-475081:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 15:10:28.342019 1074625 cli_runner.go:164] Run: docker container inspect no-preload-475081 --format={{.State.Running}}
	I1026 15:10:28.363454 1074625 cli_runner.go:164] Run: docker container inspect no-preload-475081 --format={{.State.Status}}
	I1026 15:10:28.386659 1074625 cli_runner.go:164] Run: docker exec no-preload-475081 stat /var/lib/dpkg/alternatives/iptables
	I1026 15:10:28.437636 1074625 oci.go:144] the created container "no-preload-475081" has a running status.
	I1026 15:10:28.437669 1074625 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/no-preload-475081/id_rsa...
	I1026 15:10:28.812762 1074625 cache.go:157] /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1026 15:10:28.812803 1074625 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.58721495s
	I1026 15:10:28.812820 1074625 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1026 15:10:28.818076 1074625 cache.go:157] /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1026 15:10:28.818116 1074625 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.592498947s
	I1026 15:10:28.818138 1074625 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1026 15:10:28.868837 1074625 cache.go:157] /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1026 15:10:28.868879 1074625 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.643288598s
	I1026 15:10:28.868897 1074625 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1026 15:10:29.042618 1074625 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-841519/.minikube/machines/no-preload-475081/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 15:10:29.082504 1074625 cli_runner.go:164] Run: docker container inspect no-preload-475081 --format={{.State.Status}}
	I1026 15:10:29.100432 1074625 cache.go:157] /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1026 15:10:29.100466 1074625 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.875358453s
	I1026 15:10:29.100483 1074625 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1026 15:10:29.106955 1074625 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 15:10:29.106979 1074625 kic_runner.go:114] Args: [docker exec --privileged no-preload-475081 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 15:10:29.166186 1074625 cli_runner.go:164] Run: docker container inspect no-preload-475081 --format={{.State.Status}}
	I1026 15:10:29.180960 1074625 cache.go:157] /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1026 15:10:29.180994 1074625 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 1.95566247s
	I1026 15:10:29.181009 1074625 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1026 15:10:29.181037 1074625 cache.go:87] Successfully saved all images to host disk.
	I1026 15:10:29.184950 1074625 machine.go:93] provisionDockerMachine start ...
	I1026 15:10:29.185040 1074625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:10:29.206028 1074625 main.go:141] libmachine: Using SSH client type: native
	I1026 15:10:29.206346 1074625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1026 15:10:29.206364 1074625 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:10:29.352402 1074625 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-475081
	
	I1026 15:10:29.352435 1074625 ubuntu.go:182] provisioning hostname "no-preload-475081"
	I1026 15:10:29.352503 1074625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:10:29.372402 1074625 main.go:141] libmachine: Using SSH client type: native
	I1026 15:10:29.372625 1074625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1026 15:10:29.372638 1074625 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-475081 && echo "no-preload-475081" | sudo tee /etc/hostname
	I1026 15:10:29.525268 1074625 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-475081
	
	I1026 15:10:29.525363 1074625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:10:29.544593 1074625 main.go:141] libmachine: Using SSH client type: native
	I1026 15:10:29.544859 1074625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1026 15:10:29.544879 1074625 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-475081' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-475081/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-475081' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:10:29.690878 1074625 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:10:29.690932 1074625 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-841519/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-841519/.minikube}
	I1026 15:10:29.690964 1074625 ubuntu.go:190] setting up certificates
	I1026 15:10:29.690982 1074625 provision.go:84] configureAuth start
	I1026 15:10:29.691077 1074625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-475081
	I1026 15:10:29.712324 1074625 provision.go:143] copyHostCerts
	I1026 15:10:29.712398 1074625 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem, removing ...
	I1026 15:10:29.712414 1074625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem
	I1026 15:10:29.712503 1074625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem (1082 bytes)
	I1026 15:10:29.712644 1074625 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem, removing ...
	I1026 15:10:29.712656 1074625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem
	I1026 15:10:29.712693 1074625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem (1123 bytes)
	I1026 15:10:29.712856 1074625 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem, removing ...
	I1026 15:10:29.712872 1074625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem
	I1026 15:10:29.712949 1074625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem (1675 bytes)
	I1026 15:10:29.713067 1074625 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem org=jenkins.no-preload-475081 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-475081]
	I1026 15:10:29.762969 1074625 provision.go:177] copyRemoteCerts
	I1026 15:10:29.763031 1074625 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:10:29.763072 1074625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:10:29.784546 1074625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/no-preload-475081/id_rsa Username:docker}
	I1026 15:10:29.887382 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:10:29.907584 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 15:10:29.926465 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:10:29.945955 1074625 provision.go:87] duration metric: took 254.952545ms to configureAuth
	I1026 15:10:29.945997 1074625 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:10:29.946231 1074625 config.go:182] Loaded profile config "no-preload-475081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:10:29.946343 1074625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:10:29.966380 1074625 main.go:141] libmachine: Using SSH client type: native
	I1026 15:10:29.966651 1074625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1026 15:10:29.966676 1074625 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:10:30.226858 1074625 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:10:30.226886 1074625 machine.go:96] duration metric: took 1.04191595s to provisionDockerMachine
	I1026 15:10:30.226899 1074625 client.go:171] duration metric: took 2.950920771s to LocalClient.Create
	I1026 15:10:30.226926 1074625 start.go:167] duration metric: took 2.95099448s to libmachine.API.Create "no-preload-475081"
	I1026 15:10:30.226940 1074625 start.go:293] postStartSetup for "no-preload-475081" (driver="docker")
	I1026 15:10:30.226958 1074625 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:10:30.227033 1074625 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:10:30.227091 1074625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:10:30.246866 1074625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/no-preload-475081/id_rsa Username:docker}
	I1026 15:10:30.350765 1074625 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:10:30.354675 1074625 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:10:30.354710 1074625 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:10:30.354722 1074625 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/addons for local assets ...
	I1026 15:10:30.354796 1074625 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/files for local assets ...
	I1026 15:10:30.354937 1074625 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem -> 8450952.pem in /etc/ssl/certs
	I1026 15:10:30.355071 1074625 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:10:30.364207 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:10:30.386215 1074625 start.go:296] duration metric: took 159.253153ms for postStartSetup
	I1026 15:10:30.386671 1074625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-475081
	I1026 15:10:30.406484 1074625 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/config.json ...
	I1026 15:10:30.406800 1074625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:10:30.406913 1074625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:10:30.427632 1074625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/no-preload-475081/id_rsa Username:docker}
	I1026 15:10:30.529101 1074625 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:10:30.534994 1074625 start.go:128] duration metric: took 3.269395515s to createHost
	I1026 15:10:30.535028 1074625 start.go:83] releasing machines lock for "no-preload-475081", held for 3.269540492s
	I1026 15:10:30.535113 1074625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-475081
	I1026 15:10:30.555083 1074625 ssh_runner.go:195] Run: cat /version.json
	I1026 15:10:30.555106 1074625 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:10:30.555144 1074625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:10:30.555213 1074625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:10:30.582552 1074625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/no-preload-475081/id_rsa Username:docker}
	I1026 15:10:30.584941 1074625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/no-preload-475081/id_rsa Username:docker}
	I1026 15:10:30.762921 1074625 ssh_runner.go:195] Run: systemctl --version
	I1026 15:10:30.770192 1074625 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:10:30.818289 1074625 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:10:30.823514 1074625 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:10:30.823596 1074625 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:10:30.854613 1074625 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 15:10:30.854645 1074625 start.go:495] detecting cgroup driver to use...
	I1026 15:10:30.854686 1074625 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 15:10:30.854761 1074625 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:10:30.874750 1074625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:10:30.888099 1074625 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:10:30.888186 1074625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:10:30.907974 1074625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:10:30.928050 1074625 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:10:31.027762 1074625 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:10:31.132139 1074625 docker.go:234] disabling docker service ...
	I1026 15:10:31.132226 1074625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:10:31.161293 1074625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:10:31.176331 1074625 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:10:31.276297 1074625 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:10:31.370118 1074625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:10:31.383240 1074625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:10:31.399560 1074625 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:10:31.399635 1074625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:31.412319 1074625 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 15:10:31.412377 1074625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:31.421601 1074625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:31.432352 1074625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:31.441909 1074625 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:10:31.450411 1074625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:31.460586 1074625 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:31.475668 1074625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:31.485882 1074625 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:10:31.494128 1074625 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:10:31.502474 1074625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:10:31.591232 1074625 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:10:31.719792 1074625 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:10:31.719873 1074625 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:10:31.724366 1074625 start.go:563] Will wait 60s for crictl version
	I1026 15:10:31.724440 1074625 ssh_runner.go:195] Run: which crictl
	I1026 15:10:31.729831 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:10:31.758490 1074625 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:10:31.758580 1074625 ssh_runner.go:195] Run: crio --version
	I1026 15:10:31.788001 1074625 ssh_runner.go:195] Run: crio --version
	I1026 15:10:31.823230 1074625 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:10:31.824457 1074625 cli_runner.go:164] Run: docker network inspect no-preload-475081 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:10:31.843910 1074625 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1026 15:10:31.848229 1074625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:10:31.859047 1074625 kubeadm.go:883] updating cluster {Name:no-preload-475081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-475081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:10:31.859184 1074625 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:10:31.859233 1074625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:10:31.887848 1074625 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1026 15:10:31.887880 1074625 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1026 15:10:31.887972 1074625 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:10:31.888028 1074625 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1026 15:10:31.888030 1074625 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:10:31.888047 1074625 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:10:31.888001 1074625 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:10:31.888072 1074625 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:10:31.888031 1074625 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:10:31.888069 1074625 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1026 15:10:31.889481 1074625 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:10:31.889483 1074625 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1026 15:10:31.889483 1074625 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:10:31.889484 1074625 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1026 15:10:31.889574 1074625 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:10:31.889493 1074625 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:10:31.889537 1074625 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:10:31.889593 1074625 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
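
crio.go:510 concludes the images are not preloaded by listing what CRI-O already has and comparing against the list in cache_images.go:89; the daemon lookups above then fail because the local docker daemon has no copies either, so the cached tarballs saved earlier will be loaded. A bash sketch of the per-image check, with jq as an added assumption (minikube does the comparison in Go):

  need="registry.k8s.io/kube-apiserver:v1.34.1"
  if sudo crictl images --output json \
       | jq -e --arg img "$need" \
            '.images[].repoTags[]? | select(. == $img)' >/dev/null; then
    echo "preloaded: $need"
  else
    echo "not preloaded: $need (load from cached tarballs)"
  fi
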
	I1026 15:10:31.340253 1072816 cli_runner.go:164] Run: docker network inspect old-k8s-version-330914 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:10:31.359222 1072816 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 15:10:31.363637 1072816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:10:31.374870 1072816 kubeadm.go:883] updating cluster {Name:old-k8s-version-330914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-330914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:10:31.375048 1072816 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 15:10:31.375125 1072816 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:10:31.408337 1072816 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:10:31.408360 1072816 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:10:31.408404 1072816 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:10:31.436796 1072816 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:10:31.436820 1072816 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:10:31.436828 1072816 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1026 15:10:31.436927 1072816 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-330914 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-330914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:10:31.437008 1072816 ssh_runner.go:195] Run: crio config
	I1026 15:10:31.486402 1072816 cni.go:84] Creating CNI manager for ""
	I1026 15:10:31.486426 1072816 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:10:31.486454 1072816 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:10:31.486488 1072816 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-330914 NodeName:old-k8s-version-330914 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:10:31.486651 1072816 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-330914"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 15:10:31.486727 1072816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1026 15:10:31.495375 1072816 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:10:31.495444 1072816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:10:31.503950 1072816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1026 15:10:31.517866 1072816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:10:31.540318 1072816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1026 15:10:31.554519 1072816 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:10:31.558593 1072816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:10:31.570183 1072816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:10:31.655010 1072816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:10:31.683056 1072816 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914 for IP: 192.168.85.2
	I1026 15:10:31.683080 1072816 certs.go:195] generating shared ca certs ...
	I1026 15:10:31.683101 1072816 certs.go:227] acquiring lock for ca certs: {Name:mkc310765b5f037cf348f6c57ba521193a825757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:31.683294 1072816 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key
	I1026 15:10:31.683368 1072816 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key
	I1026 15:10:31.683385 1072816 certs.go:257] generating profile certs ...
	I1026 15:10:31.683461 1072816 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/client.key
	I1026 15:10:31.683482 1072816 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/client.crt with IP's: []
	I1026 15:10:32.002037 1072816 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/client.crt ...
	I1026 15:10:32.002073 1072816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/client.crt: {Name:mk9eb27b0acc738f8e51fd36dfa2356afc000f1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:32.002303 1072816 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/client.key ...
	I1026 15:10:32.002335 1072816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/client.key: {Name:mkc7b6d36bb3c2ef946755912241f1454e702242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:32.002470 1072816 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.key.925d69a5
	I1026 15:10:32.002495 1072816 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.crt.925d69a5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1026 15:10:32.225436 1072816 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.crt.925d69a5 ...
	I1026 15:10:32.225464 1072816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.crt.925d69a5: {Name:mk6a24e6a1f89a2f77ebed52ff44c979d4a184bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:32.225660 1072816 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.key.925d69a5 ...
	I1026 15:10:32.225681 1072816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.key.925d69a5: {Name:mk3be0046a1baa63d42cce5d152c095adbce996a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:32.225771 1072816 certs.go:382] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.crt.925d69a5 -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.crt
	I1026 15:10:32.225885 1072816 certs.go:386] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.key.925d69a5 -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.key
	I1026 15:10:32.225999 1072816 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/proxy-client.key
	I1026 15:10:32.226027 1072816 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/proxy-client.crt with IP's: []
	I1026 15:10:32.784640 1072816 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/proxy-client.crt ...
	I1026 15:10:32.784675 1072816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/proxy-client.crt: {Name:mk05a5cef04b9cf172f58ba474c236de8669cc02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:32.784880 1072816 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/proxy-client.key ...
	I1026 15:10:32.784899 1072816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/proxy-client.key: {Name:mk314b1116f184c184a6c31bbb87cdd6071d4a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
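
The three profile certs above (client, apiserver, proxy-client) are generated in Go (crypto.go) and signed by the shared minikubeCA, with the apiserver cert carrying the SAN list shown in the log. A rough openssl equivalent of that apiserver cert, for comparison only; the subject and validity period are assumptions, only the SAN IPs come from the log:

  openssl req -new -newkey rsa:2048 -nodes \
    -keyout apiserver.key -out apiserver.csr -subj "/CN=minikube"
  openssl x509 -req -in apiserver.csr \
    -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
    -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2') \
    -out apiserver.crt
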
	I1026 15:10:32.785132 1072816 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem (1338 bytes)
	W1026 15:10:32.785204 1072816 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095_empty.pem, impossibly tiny 0 bytes
	I1026 15:10:32.785219 1072816 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:10:32.785253 1072816 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:10:32.785288 1072816 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:10:32.785321 1072816 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem (1675 bytes)
	I1026 15:10:32.785384 1072816 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:10:32.786060 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:10:32.807132 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:10:32.827766 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:10:32.853383 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:10:32.879599 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1026 15:10:32.905229 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 15:10:32.932852 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:10:32.959535 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 15:10:32.984333 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem --> /usr/share/ca-certificates/845095.pem (1338 bytes)
	I1026 15:10:33.008383 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /usr/share/ca-certificates/8450952.pem (1708 bytes)
	I1026 15:10:33.027719 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:10:33.046154 1072816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
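The scp calls above stage the CA material and the freshly generated profile certs under /var/lib/minikube/certs on the node. A quick manual way to confirm a copied cert and key still pair up is to compare public-key digests; a minimal sketch, using the apiserver pair from this run:

    # identical digests mean the certificate was issued for that private key
    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -pubkey | openssl sha256
    sudo openssl pkey -in /var/lib/minikube/certs/apiserver.key -pubout | openssl sha256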
	I1026 15:10:33.059296 1072816 ssh_runner.go:195] Run: openssl version
	I1026 15:10:33.065869 1072816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845095.pem && ln -fs /usr/share/ca-certificates/845095.pem /etc/ssl/certs/845095.pem"
	I1026 15:10:33.075535 1072816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845095.pem
	I1026 15:10:33.079710 1072816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:26 /usr/share/ca-certificates/845095.pem
	I1026 15:10:33.079768 1072816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845095.pem
	I1026 15:10:33.115704 1072816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/845095.pem /etc/ssl/certs/51391683.0"
	I1026 15:10:33.125301 1072816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8450952.pem && ln -fs /usr/share/ca-certificates/8450952.pem /etc/ssl/certs/8450952.pem"
	I1026 15:10:33.134687 1072816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8450952.pem
	I1026 15:10:33.138760 1072816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:26 /usr/share/ca-certificates/8450952.pem
	I1026 15:10:33.138838 1072816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8450952.pem
	I1026 15:10:33.176264 1072816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8450952.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:10:33.185687 1072816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:10:33.195157 1072816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:10:33.199515 1072816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:14 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:10:33.199587 1072816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:10:33.235934 1072816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
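The three hash-and-symlink rounds above reproduce what `openssl rehash` does: OpenSSL resolves trust anchors by subject-name hash, so each PEM must also be reachable as <hash>.0 under /etc/ssl/certs (b5213941.0 in the last round is exactly the hash computed for minikubeCA.pem). The same step by hand, as a sketch:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"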
	I1026 15:10:33.245729 1072816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:10:33.249909 1072816 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 15:10:33.249965 1072816 kubeadm.go:400] StartCluster: {Name:old-k8s-version-330914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-330914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:10:33.250054 1072816 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:10:33.250117 1072816 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:10:33.279654 1072816 cri.go:89] found id: ""
	I1026 15:10:33.279722 1072816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:10:33.288544 1072816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:10:33.297266 1072816 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 15:10:33.297331 1072816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:10:33.305971 1072816 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:10:33.305994 1072816 kubeadm.go:157] found existing configuration files:
	
	I1026 15:10:33.306044 1072816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:10:33.314347 1072816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:10:33.314413 1072816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:10:33.322584 1072816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:10:33.332682 1072816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:10:33.332752 1072816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:10:33.342655 1072816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:10:33.353198 1072816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:10:33.353277 1072816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:10:33.362809 1072816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:10:33.372545 1072816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:10:33.372607 1072816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
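The four grep/rm pairs above are the stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is deleted so kubeadm can regenerate it. On this first start every grep exits 2 because the files do not exist yet, making the rm calls no-ops. The effect, sketched as a loop (not minikube's actual Go code):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"   # missing or stale -> remove
    done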
	I1026 15:10:33.381111 1072816 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 15:10:33.431402 1072816 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1026 15:10:33.431492 1072816 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 15:10:33.482985 1072816 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 15:10:33.483075 1072816 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 15:10:33.483134 1072816 kubeadm.go:318] OS: Linux
	I1026 15:10:33.483223 1072816 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 15:10:33.483287 1072816 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 15:10:33.483344 1072816 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 15:10:33.483414 1072816 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 15:10:33.483480 1072816 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 15:10:33.483582 1072816 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 15:10:33.483666 1072816 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 15:10:33.483734 1072816 kubeadm.go:318] CGROUPS_IO: enabled
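The CGROUPS_* lines are kubeadm's system verification confirming that every controller the kubelet relies on is enabled. Assuming a cgroup v2 host (likely on this 6.8 kernel), the equivalent check is a single file read; devices and freezer will not appear there because v2 implements them differently:

    cat /sys/fs/cgroup/cgroup.controllers
    # e.g. cpuset cpu io memory hugetlb pids ...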
	I1026 15:10:33.574752 1072816 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 15:10:33.574915 1072816 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 15:10:33.575068 1072816 kubeadm.go:318] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
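As the preflight note says (complete with kubeadm's long-standing "in beforehand" typo, preserved verbatim above), the image downloads can be done ahead of `kubeadm init` so cluster bring-up does not block on the network:

    sudo kubeadm config images pull --kubernetes-version v1.28.0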
	I1026 15:10:33.766004 1072816 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 15:10:33.770323 1072816 out.go:252]   - Generating certificates and keys ...
	I1026 15:10:33.770439 1072816 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:10:33.770551 1072816 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:10:33.982608 1072816 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:10:34.135530 1072816 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:10:34.213988 1072816 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:10:34.433281 1072816 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:10:34.515682 1072816 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:10:34.515835 1072816 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-330914] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 15:10:34.675628 1072816 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:10:34.675835 1072816 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-330914] and IPs [192.168.85.2 127.0.0.1 ::1]
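Both etcd serving certs are issued for the node name plus the loopback addresses, which lets peers and the apiserver reach etcd via 192.168.85.2 as well as localhost. The SANs can be read back with OpenSSL 1.1.1+; the path below assumes kubeadm's certificateDir from this run:

    sudo openssl x509 -in /var/lib/minikube/certs/etcd/server.crt -noout -ext subjectAltName
    # DNS:localhost, DNS:old-k8s-version-330914, IP Address:192.168.85.2, 127.0.0.1, ::1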
	I1026 15:10:30.577561 1030092 cri.go:89] found id: ""
	I1026 15:10:30.577593 1030092 logs.go:282] 0 containers: []
	W1026 15:10:30.577613 1030092 logs.go:284] No container was found matching "kube-proxy"
	I1026 15:10:30.577622 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:10:30.577685 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:10:30.613352 1030092 cri.go:89] found id: "fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:30.613381 1030092 cri.go:89] found id: ""
	I1026 15:10:30.613391 1030092 logs.go:282] 1 containers: [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1]
	I1026 15:10:30.613449 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:30.617851 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:10:30.617925 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:10:30.649422 1030092 cri.go:89] found id: ""
	I1026 15:10:30.649459 1030092 logs.go:282] 0 containers: []
	W1026 15:10:30.649471 1030092 logs.go:284] No container was found matching "kindnet"
	I1026 15:10:30.649480 1030092 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 15:10:30.649542 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 15:10:30.680591 1030092 cri.go:89] found id: ""
	I1026 15:10:30.680623 1030092 logs.go:282] 0 containers: []
	W1026 15:10:30.680633 1030092 logs.go:284] No container was found matching "storage-provisioner"
	I1026 15:10:30.680646 1030092 logs.go:123] Gathering logs for kubelet ...
	I1026 15:10:30.680663 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:10:30.781047 1030092 logs.go:123] Gathering logs for dmesg ...
	I1026 15:10:30.781085 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:10:30.800059 1030092 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:10:30.800092 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:10:30.872357 1030092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 15:10:30.872384 1030092 logs.go:123] Gathering logs for kube-apiserver [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c] ...
	I1026 15:10:30.872402 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:30.910824 1030092 logs.go:123] Gathering logs for kube-scheduler [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e] ...
	I1026 15:10:30.910852 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:30.973301 1030092 logs.go:123] Gathering logs for kube-controller-manager [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1] ...
	I1026 15:10:30.973338 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:31.003970 1030092 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:10:31.003998 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:10:31.079300 1030092 logs.go:123] Gathering logs for container status ...
	I1026 15:10:31.079343 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
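The final gather step is deliberately defensive: `which crictl || echo crictl` falls back to whatever `crictl` resolves to at execution time, and the trailing `|| sudo docker ps -a` covers hosts where crictl is missing entirely. Lifted out of the log as a standalone one-liner:

    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a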
	I1026 15:10:33.613645 1030092 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:10:33.614197 1030092 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1026 15:10:33.614277 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:10:33.614343 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:10:33.652052 1030092 cri.go:89] found id: "a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:33.652083 1030092 cri.go:89] found id: ""
	I1026 15:10:33.652093 1030092 logs.go:282] 1 containers: [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c]
	I1026 15:10:33.652189 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:33.657696 1030092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:10:33.657779 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:10:33.695307 1030092 cri.go:89] found id: ""
	I1026 15:10:33.695339 1030092 logs.go:282] 0 containers: []
	W1026 15:10:33.695350 1030092 logs.go:284] No container was found matching "etcd"
	I1026 15:10:33.695358 1030092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:10:33.695424 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:10:33.731199 1030092 cri.go:89] found id: ""
	I1026 15:10:33.731230 1030092 logs.go:282] 0 containers: []
	W1026 15:10:33.731241 1030092 logs.go:284] No container was found matching "coredns"
	I1026 15:10:33.731249 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:10:33.731311 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:10:33.764354 1030092 cri.go:89] found id: "933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:33.764382 1030092 cri.go:89] found id: ""
	I1026 15:10:33.764393 1030092 logs.go:282] 1 containers: [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e]
	I1026 15:10:33.764455 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:33.770779 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:10:33.770849 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:10:33.809746 1030092 cri.go:89] found id: ""
	I1026 15:10:33.809778 1030092 logs.go:282] 0 containers: []
	W1026 15:10:33.809787 1030092 logs.go:284] No container was found matching "kube-proxy"
	I1026 15:10:33.809793 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:10:33.809856 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:10:33.847832 1030092 cri.go:89] found id: "fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:33.847857 1030092 cri.go:89] found id: ""
	I1026 15:10:33.847869 1030092 logs.go:282] 1 containers: [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1]
	I1026 15:10:33.847925 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:33.853185 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:10:33.853259 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:10:33.888338 1030092 cri.go:89] found id: ""
	I1026 15:10:33.888369 1030092 logs.go:282] 0 containers: []
	W1026 15:10:33.888388 1030092 logs.go:284] No container was found matching "kindnet"
	I1026 15:10:33.888396 1030092 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 15:10:33.888459 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 15:10:33.924904 1030092 cri.go:89] found id: ""
	I1026 15:10:33.924937 1030092 logs.go:282] 0 containers: []
	W1026 15:10:33.924948 1030092 logs.go:284] No container was found matching "storage-provisioner"
	I1026 15:10:33.924959 1030092 logs.go:123] Gathering logs for kube-scheduler [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e] ...
	I1026 15:10:33.924974 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:33.984061 1030092 logs.go:123] Gathering logs for kube-controller-manager [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1] ...
	I1026 15:10:33.984105 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:34.015141 1030092 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:10:34.015193 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:10:34.086409 1030092 logs.go:123] Gathering logs for container status ...
	I1026 15:10:34.086458 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 15:10:34.124652 1030092 logs.go:123] Gathering logs for kubelet ...
	I1026 15:10:34.124684 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:10:34.225823 1030092 logs.go:123] Gathering logs for dmesg ...
	I1026 15:10:34.225866 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:10:34.243681 1030092 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:10:34.243743 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:10:34.318018 1030092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 15:10:34.318050 1030092 logs.go:123] Gathering logs for kube-apiserver [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c] ...
	I1026 15:10:34.318068 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
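These repeating probes come from a second cluster bring-up (pid 1030092, 192.168.76.2) interleaved with the old-k8s-version run; they fail with "connection refused" simply because no apiserver is listening yet. The same health probe by hand:

    curl -k https://192.168.76.2:8443/healthz
    # prints "ok" once the apiserver serves; connection refused while it is down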
	I1026 15:10:35.202607 1072816 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:10:35.453616 1072816 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:10:35.831680 1072816 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:10:35.831797 1072816 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:10:35.949112 1072816 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:10:36.147523 1072816 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:10:36.354875 1072816 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:10:36.624459 1072816 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:10:36.625121 1072816 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:10:36.629821 1072816 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
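At this point kubeadm has written the static Pod manifests; the kubelet watches this directory and starts the control plane without needing a running apiserver. With the standard kubeadm layout:

    ls /etc/kubernetes/manifests
    # etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml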
	I1026 15:10:32.018681 1074625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:10:32.020893 1074625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:10:32.033545 1074625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1026 15:10:32.036389 1074625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:10:32.041956 1074625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:10:32.059733 1074625 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1026 15:10:32.059802 1074625 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:10:32.059857 1074625 ssh_runner.go:195] Run: which crictl
	I1026 15:10:32.061896 1074625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:10:32.066062 1074625 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1026 15:10:32.066119 1074625 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:10:32.066182 1074625 ssh_runner.go:195] Run: which crictl
	I1026 15:10:32.068105 1074625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1026 15:10:32.081045 1074625 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1026 15:10:32.081099 1074625 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1026 15:10:32.081174 1074625 ssh_runner.go:195] Run: which crictl
	I1026 15:10:32.084339 1074625 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1026 15:10:32.084388 1074625 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:10:32.084441 1074625 ssh_runner.go:195] Run: which crictl
	I1026 15:10:32.095381 1074625 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1026 15:10:32.095426 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:10:32.095437 1074625 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:10:32.095486 1074625 ssh_runner.go:195] Run: which crictl
	I1026 15:10:32.107843 1074625 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1026 15:10:32.107896 1074625 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:10:32.107940 1074625 ssh_runner.go:195] Run: which crictl
	I1026 15:10:32.107939 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:10:32.110203 1074625 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1026 15:10:32.110245 1074625 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1026 15:10:32.110261 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1026 15:10:32.110289 1074625 ssh_runner.go:195] Run: which crictl
	I1026 15:10:32.110292 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:10:32.129103 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:10:32.129151 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:10:32.129111 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:10:32.145406 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:10:32.145550 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1026 15:10:32.146306 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:10:32.146421 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1026 15:10:32.166820 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:10:32.175924 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:10:32.176028 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:10:32.185649 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:10:32.186033 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1026 15:10:32.187811 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1026 15:10:32.190574 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:10:32.211363 1074625 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1026 15:10:32.211467 1074625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1026 15:10:32.217831 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:10:32.220691 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:10:32.225362 1074625 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1026 15:10:32.225750 1074625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1026 15:10:32.230699 1074625 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1026 15:10:32.230813 1074625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1026 15:10:32.238347 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1026 15:10:32.238485 1074625 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1026 15:10:32.238525 1074625 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1026 15:10:32.238567 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1026 15:10:32.238595 1074625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1026 15:10:32.274350 1074625 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1026 15:10:32.274383 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1026 15:10:32.274415 1074625 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1026 15:10:32.274427 1074625 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1026 15:10:32.274438 1074625 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1026 15:10:32.274451 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1026 15:10:32.274498 1074625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1026 15:10:32.274523 1074625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1026 15:10:32.296848 1074625 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1026 15:10:32.296907 1074625 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1026 15:10:32.296949 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1026 15:10:32.296960 1074625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1026 15:10:32.344105 1074625 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1026 15:10:32.344106 1074625 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1026 15:10:32.344148 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1026 15:10:32.344177 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1026 15:10:32.345574 1074625 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1026 15:10:32.345602 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
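Every transfer in this block is guarded by a `stat -c "%s %y"` probe: only when stat exits 1 (file absent) is the cached image tarball copied over. Sketched as plain shell (minikube performs the copy over its own SSH session; scp merely stands in for illustration, with one image path picked from the log):

    IMG=/var/lib/minikube/images/pause_3.10.1
    stat -c "%s %y" "$IMG" 2>/dev/null \
      || scp ~/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 "node:$IMG"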
	I1026 15:10:32.403676 1074625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:10:32.440894 1074625 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1026 15:10:32.440967 1074625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1026 15:10:32.497117 1074625 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1026 15:10:32.497206 1074625 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:10:32.497279 1074625 ssh_runner.go:195] Run: which crictl
	I1026 15:10:32.922489 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:10:32.922641 1074625 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1026 15:10:32.922674 1074625 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1026 15:10:32.922726 1074625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1026 15:10:32.964817 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:10:34.143632 1074625 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.220874662s)
	I1026 15:10:34.143671 1074625 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1026 15:10:34.143699 1074625 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1026 15:10:34.143695 1074625 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.17883613s)
	I1026 15:10:34.143757 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:10:34.143761 1074625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1026 15:10:34.177240 1074625 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1026 15:10:34.177364 1074625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1026 15:10:35.486100 1074625 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.342311581s)
	I1026 15:10:35.486137 1074625 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1026 15:10:35.486173 1074625 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.308769695s)
	I1026 15:10:35.486209 1074625 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1026 15:10:35.486233 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1026 15:10:35.486179 1074625 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1026 15:10:35.486305 1074625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1026 15:10:36.659635 1074625 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.173287598s)
	I1026 15:10:36.659671 1074625 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1026 15:10:36.659699 1074625 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1026 15:10:36.659753 1074625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1026 15:10:36.631017 1072816 out.go:252]   - Booting up control plane ...
	I1026 15:10:36.631125 1072816 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:10:36.631260 1072816 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:10:36.632077 1072816 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:10:36.647824 1072816 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:10:36.649095 1072816 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:10:36.649203 1072816 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:10:36.763431 1072816 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
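While kubeadm waits (up to 4m0s) the kubelet is pulling and starting the four static Pods. Progress is observable from inside the node with the same tools the log uses elsewhere:

    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
    sudo journalctl -u kubelet -f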
	I1026 15:10:36.857206 1030092 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:10:36.857668 1030092 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1026 15:10:36.857739 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:10:36.857813 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:10:36.891242 1030092 cri.go:89] found id: "a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:36.891271 1030092 cri.go:89] found id: ""
	I1026 15:10:36.891283 1030092 logs.go:282] 1 containers: [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c]
	I1026 15:10:36.891346 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:36.895675 1030092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:10:36.895744 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:10:36.925613 1030092 cri.go:89] found id: ""
	I1026 15:10:36.925645 1030092 logs.go:282] 0 containers: []
	W1026 15:10:36.925656 1030092 logs.go:284] No container was found matching "etcd"
	I1026 15:10:36.925664 1030092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:10:36.925736 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:10:36.955042 1030092 cri.go:89] found id: ""
	I1026 15:10:36.955070 1030092 logs.go:282] 0 containers: []
	W1026 15:10:36.955081 1030092 logs.go:284] No container was found matching "coredns"
	I1026 15:10:36.955088 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:10:36.955154 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:10:36.986661 1030092 cri.go:89] found id: "933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:36.986687 1030092 cri.go:89] found id: ""
	I1026 15:10:36.986697 1030092 logs.go:282] 1 containers: [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e]
	I1026 15:10:36.986761 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:36.990955 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:10:36.991029 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:10:37.021347 1030092 cri.go:89] found id: ""
	I1026 15:10:37.021375 1030092 logs.go:282] 0 containers: []
	W1026 15:10:37.021386 1030092 logs.go:284] No container was found matching "kube-proxy"
	I1026 15:10:37.021394 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:10:37.021456 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:10:37.053092 1030092 cri.go:89] found id: "fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:37.053117 1030092 cri.go:89] found id: ""
	I1026 15:10:37.053128 1030092 logs.go:282] 1 containers: [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1]
	I1026 15:10:37.053228 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:37.057878 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:10:37.057959 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:10:37.087829 1030092 cri.go:89] found id: ""
	I1026 15:10:37.087861 1030092 logs.go:282] 0 containers: []
	W1026 15:10:37.087873 1030092 logs.go:284] No container was found matching "kindnet"
	I1026 15:10:37.087881 1030092 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 15:10:37.087938 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 15:10:37.118065 1030092 cri.go:89] found id: ""
	I1026 15:10:37.118091 1030092 logs.go:282] 0 containers: []
	W1026 15:10:37.118100 1030092 logs.go:284] No container was found matching "storage-provisioner"
	I1026 15:10:37.118110 1030092 logs.go:123] Gathering logs for kube-controller-manager [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1] ...
	I1026 15:10:37.118125 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:37.147916 1030092 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:10:37.147949 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:10:37.210610 1030092 logs.go:123] Gathering logs for container status ...
	I1026 15:10:37.210652 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 15:10:37.243472 1030092 logs.go:123] Gathering logs for kubelet ...
	I1026 15:10:37.243504 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:10:37.335697 1030092 logs.go:123] Gathering logs for dmesg ...
	I1026 15:10:37.335736 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:10:37.352960 1030092 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:10:37.352997 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:10:37.418139 1030092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 15:10:37.418180 1030092 logs.go:123] Gathering logs for kube-apiserver [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c] ...
	I1026 15:10:37.418199 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:37.452738 1030092 logs.go:123] Gathering logs for kube-scheduler [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e] ...
	I1026 15:10:37.452781 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:40.018225 1030092 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:10:40.018690 1030092 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1026 15:10:40.018760 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:10:40.018827 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:10:40.050881 1030092 cri.go:89] found id: "a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:40.050908 1030092 cri.go:89] found id: ""
	I1026 15:10:40.050918 1030092 logs.go:282] 1 containers: [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c]
	I1026 15:10:40.050978 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:40.055572 1030092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:10:40.055647 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:10:40.088582 1030092 cri.go:89] found id: ""
	I1026 15:10:40.088621 1030092 logs.go:282] 0 containers: []
	W1026 15:10:40.088632 1030092 logs.go:284] No container was found matching "etcd"
	I1026 15:10:40.088641 1030092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:10:40.088702 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:10:40.120034 1030092 cri.go:89] found id: ""
	I1026 15:10:40.120066 1030092 logs.go:282] 0 containers: []
	W1026 15:10:40.120076 1030092 logs.go:284] No container was found matching "coredns"
	I1026 15:10:40.120085 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:10:40.120149 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:10:40.151346 1030092 cri.go:89] found id: "933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:40.151376 1030092 cri.go:89] found id: ""
	I1026 15:10:40.151387 1030092 logs.go:282] 1 containers: [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e]
	I1026 15:10:40.151453 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:40.159276 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:10:40.159356 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:10:40.190963 1030092 cri.go:89] found id: ""
	I1026 15:10:40.190993 1030092 logs.go:282] 0 containers: []
	W1026 15:10:40.191004 1030092 logs.go:284] No container was found matching "kube-proxy"
	I1026 15:10:40.191012 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:10:40.191070 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:10:40.222082 1030092 cri.go:89] found id: "fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:40.222109 1030092 cri.go:89] found id: ""
	I1026 15:10:40.222119 1030092 logs.go:282] 1 containers: [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1]
	I1026 15:10:40.222204 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:40.226905 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:10:40.226967 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:10:40.260965 1030092 cri.go:89] found id: ""
	I1026 15:10:40.260999 1030092 logs.go:282] 0 containers: []
	W1026 15:10:40.261010 1030092 logs.go:284] No container was found matching "kindnet"
	I1026 15:10:40.261025 1030092 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 15:10:40.261100 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 15:10:40.297012 1030092 cri.go:89] found id: ""
	I1026 15:10:40.297039 1030092 logs.go:282] 0 containers: []
	W1026 15:10:40.297050 1030092 logs.go:284] No container was found matching "storage-provisioner"
	I1026 15:10:40.297062 1030092 logs.go:123] Gathering logs for dmesg ...
	I1026 15:10:40.297079 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:10:40.313501 1030092 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:10:40.313533 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:10:40.373572 1030092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 15:10:40.373596 1030092 logs.go:123] Gathering logs for kube-apiserver [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c] ...
	I1026 15:10:40.373612 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:40.417664 1030092 logs.go:123] Gathering logs for kube-scheduler [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e] ...
	I1026 15:10:40.417712 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:40.483616 1030092 logs.go:123] Gathering logs for kube-controller-manager [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1] ...
	I1026 15:10:40.483661 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:40.525305 1030092 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:10:40.525341 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
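	
	The block above is minikube's log-gathering loop: for each control-plane component it lists container IDs with `crictl ps -a --quiet --name=<component>`, then tails the last 400 lines of any match. A minimal Go sketch of that pattern (not minikube's actual cri.go/logs.go code; helper names here are illustrative):
	
	```go
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// listContainers runs `crictl ps -a --quiet --name=<name>` and returns the
	// non-empty container IDs, mirroring the "found id:" lines in the log.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, id := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if id != "" {
				ids = append(ids, id)
			}
		}
		return ids, nil
	}
	
	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainers(component)
			if err != nil || len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", component)
				continue
			}
			for _, id := range ids {
				// Tail the last 400 lines, as the log-gathering step above does.
				logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
			}
		}
	}
	```
	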
	I1026 15:10:38.520024 1074625 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.860236493s)
	I1026 15:10:38.520061 1074625 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1026 15:10:38.520093 1074625 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1026 15:10:38.520148 1074625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1026 15:10:39.674142 1074625 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.153961295s)
	I1026 15:10:39.674192 1074625 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1026 15:10:39.674228 1074625 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1026 15:10:39.674288 1074625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
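	
	The `podman load` completions above are the no-preload image path: each cached tarball is loaded into the node's container storage (shared by podman and CRI-O) instead of being pulled from a registry. A hedged sketch of that loop, assuming the tarball paths shown in the log:
	
	```go
	package main
	
	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)
	
	func main() {
		images := []string{
			"/var/lib/minikube/images/kube-apiserver_v1.34.1",
			"/var/lib/minikube/images/kube-proxy_v1.34.1",
			"/var/lib/minikube/images/etcd_3.6.4-0",
			"/var/lib/minikube/images/storage-provisioner_v5",
		}
		for _, tarball := range images {
			start := time.Now()
			// podman and CRI-O share containers/storage, so a loaded image is
			// immediately visible to the runtime without a registry pull.
			if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
				log.Fatalf("podman load %s: %v\n%s", tarball, err, out)
			}
			fmt.Printf("Completed: sudo podman load -i %s: (%s)\n", tarball, time.Since(start))
		}
	}
	```
	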
	I1026 15:10:42.266629 1072816 kubeadm.go:318] [apiclient] All control plane components are healthy after 5.502468 seconds
	I1026 15:10:42.266804 1072816 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 15:10:42.280014 1072816 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 15:10:42.970456 1072816 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 15:10:42.970768 1072816 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-330914 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 15:10:43.481696 1072816 kubeadm.go:318] [bootstrap-token] Using token: xh3wal.dc3bxz92s5jgqwbr
	I1026 15:10:43.483250 1072816 out.go:252]   - Configuring RBAC rules ...
	I1026 15:10:43.483439 1072816 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 15:10:43.488482 1072816 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 15:10:43.497415 1072816 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 15:10:43.501420 1072816 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 15:10:43.506734 1072816 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 15:10:43.512791 1072816 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 15:10:43.525458 1072816 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 15:10:43.754807 1072816 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 15:10:43.893980 1072816 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 15:10:43.895266 1072816 kubeadm.go:318] 
	I1026 15:10:43.895362 1072816 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 15:10:43.895372 1072816 kubeadm.go:318] 
	I1026 15:10:43.895468 1072816 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 15:10:43.895477 1072816 kubeadm.go:318] 
	I1026 15:10:43.895526 1072816 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 15:10:43.895598 1072816 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 15:10:43.895666 1072816 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 15:10:43.895682 1072816 kubeadm.go:318] 
	I1026 15:10:43.895764 1072816 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 15:10:43.895773 1072816 kubeadm.go:318] 
	I1026 15:10:43.895836 1072816 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 15:10:43.895844 1072816 kubeadm.go:318] 
	I1026 15:10:43.895912 1072816 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 15:10:43.896005 1072816 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 15:10:43.896116 1072816 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 15:10:43.896138 1072816 kubeadm.go:318] 
	I1026 15:10:43.896270 1072816 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 15:10:43.896371 1072816 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 15:10:43.896381 1072816 kubeadm.go:318] 
	I1026 15:10:43.896485 1072816 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token xh3wal.dc3bxz92s5jgqwbr \
	I1026 15:10:43.896627 1072816 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:17405a11f9ced5253329d88582717a258ab19676719f7fb1d52a2fb8fc3ffa0b \
	I1026 15:10:43.896670 1072816 kubeadm.go:318] 	--control-plane 
	I1026 15:10:43.896684 1072816 kubeadm.go:318] 
	I1026 15:10:43.896796 1072816 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 15:10:43.896806 1072816 kubeadm.go:318] 
	I1026 15:10:43.896900 1072816 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token xh3wal.dc3bxz92s5jgqwbr \
	I1026 15:10:43.897078 1072816 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:17405a11f9ced5253329d88582717a258ab19676719f7fb1d52a2fb8fc3ffa0b 
	I1026 15:10:43.899323 1072816 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1026 15:10:43.899495 1072816 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 15:10:43.899550 1072816 cni.go:84] Creating CNI manager for ""
	I1026 15:10:43.899563 1072816 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:10:43.902312 1072816 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 15:10:43.903756 1072816 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 15:10:43.909822 1072816 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1026 15:10:43.909846 1072816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 15:10:43.928429 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 15:10:44.768678 1072816 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:10:44.768751 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:44.768787 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-330914 minikube.k8s.io/updated_at=2025_10_26T15_10_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=old-k8s-version-330914 minikube.k8s.io/primary=true
	I1026 15:10:44.860373 1072816 ops.go:34] apiserver oom_adj: -16
	I1026 15:10:44.860606 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:40.593499 1030092 logs.go:123] Gathering logs for container status ...
	I1026 15:10:40.593540 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 15:10:40.633591 1030092 logs.go:123] Gathering logs for kubelet ...
	I1026 15:10:40.633624 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:10:43.273535 1030092 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:10:43.274231 1030092 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1026 15:10:43.274300 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:10:43.274362 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:10:43.305003 1030092 cri.go:89] found id: "a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:43.305033 1030092 cri.go:89] found id: ""
	I1026 15:10:43.305045 1030092 logs.go:282] 1 containers: [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c]
	I1026 15:10:43.305121 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:43.309961 1030092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:10:43.310038 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:10:43.345422 1030092 cri.go:89] found id: ""
	I1026 15:10:43.345451 1030092 logs.go:282] 0 containers: []
	W1026 15:10:43.345461 1030092 logs.go:284] No container was found matching "etcd"
	I1026 15:10:43.345469 1030092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:10:43.345548 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:10:43.382669 1030092 cri.go:89] found id: ""
	I1026 15:10:43.382702 1030092 logs.go:282] 0 containers: []
	W1026 15:10:43.382714 1030092 logs.go:284] No container was found matching "coredns"
	I1026 15:10:43.382722 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:10:43.382861 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:10:43.415044 1030092 cri.go:89] found id: "933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:43.415065 1030092 cri.go:89] found id: ""
	I1026 15:10:43.415075 1030092 logs.go:282] 1 containers: [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e]
	I1026 15:10:43.415132 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:43.419503 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:10:43.419575 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:10:43.450575 1030092 cri.go:89] found id: ""
	I1026 15:10:43.450600 1030092 logs.go:282] 0 containers: []
	W1026 15:10:43.450608 1030092 logs.go:284] No container was found matching "kube-proxy"
	I1026 15:10:43.450614 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:10:43.450662 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:10:43.482543 1030092 cri.go:89] found id: "fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:43.482566 1030092 cri.go:89] found id: ""
	I1026 15:10:43.482577 1030092 logs.go:282] 1 containers: [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1]
	I1026 15:10:43.482630 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:43.488081 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:10:43.488205 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:10:43.525646 1030092 cri.go:89] found id: ""
	I1026 15:10:43.525672 1030092 logs.go:282] 0 containers: []
	W1026 15:10:43.525684 1030092 logs.go:284] No container was found matching "kindnet"
	I1026 15:10:43.525692 1030092 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 15:10:43.525763 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 15:10:43.558403 1030092 cri.go:89] found id: ""
	I1026 15:10:43.558432 1030092 logs.go:282] 0 containers: []
	W1026 15:10:43.558443 1030092 logs.go:284] No container was found matching "storage-provisioner"
	I1026 15:10:43.558456 1030092 logs.go:123] Gathering logs for kube-apiserver [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c] ...
	I1026 15:10:43.558475 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:43.599649 1030092 logs.go:123] Gathering logs for kube-scheduler [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e] ...
	I1026 15:10:43.599685 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:43.658260 1030092 logs.go:123] Gathering logs for kube-controller-manager [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1] ...
	I1026 15:10:43.658304 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:43.693382 1030092 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:10:43.693422 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:10:43.791323 1030092 logs.go:123] Gathering logs for container status ...
	I1026 15:10:43.791397 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 15:10:43.848922 1030092 logs.go:123] Gathering logs for kubelet ...
	I1026 15:10:43.848969 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:10:43.972494 1030092 logs.go:123] Gathering logs for dmesg ...
	I1026 15:10:43.972560 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:10:43.991033 1030092 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:10:43.991072 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 15:10:43.448396 1074625 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.774078503s)
	I1026 15:10:43.448432 1074625 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1026 15:10:43.448461 1074625 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1026 15:10:43.448507 1074625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1026 15:10:44.149054 1074625 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1026 15:10:44.149103 1074625 cache_images.go:124] Successfully loaded all cached images
	I1026 15:10:44.149112 1074625 cache_images.go:93] duration metric: took 12.261214832s to LoadCachedImages
	I1026 15:10:44.149128 1074625 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1026 15:10:44.149251 1074625 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-475081 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-475081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:10:44.149354 1074625 ssh_runner.go:195] Run: crio config
	I1026 15:10:44.203502 1074625 cni.go:84] Creating CNI manager for ""
	I1026 15:10:44.203533 1074625 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:10:44.203995 1074625 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:10:44.204048 1074625 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-475081 NodeName:no-preload-475081 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:10:44.204213 1074625 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-475081"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
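	
	Before the real init, a generated config like the one above can be sanity-checked without touching the node. A minimal sketch, assuming `kubeadm` is on the PATH (the log does not show minikube performing this dry-run step; it is illustrative only):
	
	```go
	package main
	
	import (
		"log"
		"os"
		"os/exec"
	)
	
	func main() {
		// --dry-run exercises config parsing and manifest generation without
		// modifying the node, so a malformed kubeadm.yaml fails fast here.
		cmd := exec.Command("sudo", "kubeadm", "init",
			"--config", "/var/tmp/minikube/kubeadm.yaml",
			"--dry-run")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("kubeadm config rejected: %v", err)
		}
	}
	```
	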
	
	I1026 15:10:44.204277 1074625 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:10:44.213593 1074625 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1026 15:10:44.213657 1074625 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1026 15:10:44.222314 1074625 binary.go:78] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1026 15:10:44.222370 1074625 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21664-841519/.minikube/cache/bin/linux/amd64/v1.34.1/kubelet
	I1026 15:10:44.222409 1074625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1026 15:10:44.222422 1074625 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21664-841519/.minikube/cache/bin/linux/amd64/v1.34.1/kubeadm
	I1026 15:10:44.227450 1074625 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1026 15:10:44.227484 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/cache/bin/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1026 15:10:44.928824 1074625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:10:44.946284 1074625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1026 15:10:44.950947 1074625 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1026 15:10:44.950983 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/cache/bin/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1026 15:10:45.116204 1074625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1026 15:10:45.120930 1074625 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1026 15:10:45.120963 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/cache/bin/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
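	
	The stat-then-scp sequence above transfers kubectl, kubelet, and kubeadm only when the target path is missing on the node. A hedged sketch of that existence check, with a plain local file copy standing in for minikube's scp-over-ssh (the cache root below is illustrative):
	
	```go
	package main
	
	import (
		"fmt"
		"io"
		"os"
	)
	
	// ensureBinary copies the cached binary to target only if target is absent,
	// mirroring the `stat -c "%s %y"` existence check in the log above.
	func ensureBinary(cached, target string) error {
		if _, err := os.Stat(target); err == nil {
			return nil // already transferred on a previous start
		}
		src, err := os.Open(cached)
		if err != nil {
			return err
		}
		defer src.Close()
		dst, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
		if err != nil {
			return err
		}
		defer dst.Close()
		n, err := io.Copy(dst, src)
		fmt.Printf("copied %s --> %s (%d bytes)\n", cached, target, n)
		return err
	}
	
	func main() {
		for _, bin := range []string{"kubectl", "kubelet", "kubeadm"} {
			cached := "/home/jenkins/.minikube/cache/bin/linux/amd64/v1.34.1/" + bin // illustrative
			target := "/var/lib/minikube/binaries/v1.34.1/" + bin
			if err := ensureBinary(cached, target); err != nil {
				fmt.Println("ensure", bin, ":", err)
			}
		}
	}
	```
	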
	I1026 15:10:45.300674 1074625 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:10:45.309407 1074625 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1026 15:10:45.323054 1074625 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:10:45.338714 1074625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1026 15:10:45.352305 1074625 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:10:45.356854 1074625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:10:45.368308 1074625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:10:45.464417 1074625 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:10:45.492027 1074625 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081 for IP: 192.168.103.2
	I1026 15:10:45.492050 1074625 certs.go:195] generating shared ca certs ...
	I1026 15:10:45.492072 1074625 certs.go:227] acquiring lock for ca certs: {Name:mkc310765b5f037cf348f6c57ba521193a825757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:45.492245 1074625 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key
	I1026 15:10:45.492304 1074625 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key
	I1026 15:10:45.492319 1074625 certs.go:257] generating profile certs ...
	I1026 15:10:45.492384 1074625 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.key
	I1026 15:10:45.492401 1074625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.crt with IP's: []
	I1026 15:10:45.573728 1074625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.crt ...
	I1026 15:10:45.573764 1074625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.crt: {Name:mk1c68b47d96bf0fa064d0c385a591ce7192cb40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:45.573986 1074625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.key ...
	I1026 15:10:45.574005 1074625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.key: {Name:mk8ff9c5efe791a217f5aec77adc1e800bdbc1cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:45.574141 1074625 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.key.309b7b8c
	I1026 15:10:45.574173 1074625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.crt.309b7b8c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1026 15:10:45.602030 1074625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.crt.309b7b8c ...
	I1026 15:10:45.602063 1074625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.crt.309b7b8c: {Name:mk3c4f606bb3b01f4ead75fd7c60c12657747164 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:45.602271 1074625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.key.309b7b8c ...
	I1026 15:10:45.602294 1074625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.key.309b7b8c: {Name:mkb11285270b24fcdbbedfae253bcf6b4adebe83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:45.602407 1074625 certs.go:382] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.crt.309b7b8c -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.crt
	I1026 15:10:45.602512 1074625 certs.go:386] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.key.309b7b8c -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.key
	I1026 15:10:45.602603 1074625 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/proxy-client.key
	I1026 15:10:45.602626 1074625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/proxy-client.crt with IP's: []
	I1026 15:10:45.764536 1074625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/proxy-client.crt ...
	I1026 15:10:45.764572 1074625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/proxy-client.crt: {Name:mk1b8448ab2933df1fd6cf4ba85128cd72f09cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:45.764797 1074625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/proxy-client.key ...
	I1026 15:10:45.764827 1074625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/proxy-client.key: {Name:mka0a1561f58707904c136e3363859092ae2794d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:45.765044 1074625 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem (1338 bytes)
	W1026 15:10:45.765082 1074625 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095_empty.pem, impossibly tiny 0 bytes
	I1026 15:10:45.765096 1074625 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:10:45.765117 1074625 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:10:45.765142 1074625 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:10:45.765179 1074625 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem (1675 bytes)
	I1026 15:10:45.765216 1074625 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:10:45.765896 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:10:45.785904 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:10:45.805307 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:10:45.824753 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:10:45.843848 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 15:10:45.863063 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 15:10:45.882732 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:10:45.902272 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 15:10:45.923637 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /usr/share/ca-certificates/8450952.pem (1708 bytes)
	I1026 15:10:45.944891 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:10:45.963627 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem --> /usr/share/ca-certificates/845095.pem (1338 bytes)
	I1026 15:10:45.981966 1074625 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:10:45.995743 1074625 ssh_runner.go:195] Run: openssl version
	I1026 15:10:46.002539 1074625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8450952.pem && ln -fs /usr/share/ca-certificates/8450952.pem /etc/ssl/certs/8450952.pem"
	I1026 15:10:46.012149 1074625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8450952.pem
	I1026 15:10:46.016321 1074625 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:26 /usr/share/ca-certificates/8450952.pem
	I1026 15:10:46.016384 1074625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8450952.pem
	I1026 15:10:46.050687 1074625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8450952.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:10:46.059947 1074625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:10:46.068580 1074625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:10:46.072700 1074625 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:14 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:10:46.072759 1074625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:10:46.108914 1074625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:10:46.118716 1074625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845095.pem && ln -fs /usr/share/ca-certificates/845095.pem /etc/ssl/certs/845095.pem"
	I1026 15:10:46.128120 1074625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845095.pem
	I1026 15:10:46.132650 1074625 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:26 /usr/share/ca-certificates/845095.pem
	I1026 15:10:46.132705 1074625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845095.pem
	I1026 15:10:46.167932 1074625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/845095.pem /etc/ssl/certs/51391683.0"
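	
	The openssl/ln pairs above install each CA into the system trust store: `openssl x509 -hash -noout -in <pem>` yields the subject hash (e.g. b5213941 for minikubeCA.pem, matching the symlink name in the log), and `/etc/ssl/certs/<hash>.0` is linked to the PEM so OpenSSL-based clients trust it. A minimal sketch of the same step (requires root):
	
	```go
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			fmt.Println("openssl:", err)
			return
		}
		hash := strings.TrimSpace(string(out)) // subject hash, e.g. "b5213941"
		link := "/etc/ssl/certs/" + hash + ".0"
		// `ln -fs` equivalent: drop any stale link, then point it at the PEM.
		_ = os.Remove(link)
		if err := os.Symlink(pem, link); err != nil {
			fmt.Println("symlink:", err)
			return
		}
		fmt.Println("trusted:", link, "->", pem)
	}
	```
	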
	I1026 15:10:46.177719 1074625 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:10:46.181926 1074625 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 15:10:46.181982 1074625 kubeadm.go:400] StartCluster: {Name:no-preload-475081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-475081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:10:46.182082 1074625 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:10:46.182156 1074625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:10:46.211693 1074625 cri.go:89] found id: ""
	I1026 15:10:46.211755 1074625 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:10:46.220472 1074625 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:10:46.229220 1074625 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 15:10:46.229277 1074625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:10:46.238052 1074625 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:10:46.238072 1074625 kubeadm.go:157] found existing configuration files:
	
	I1026 15:10:46.238112 1074625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:10:46.246680 1074625 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:10:46.246761 1074625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:10:46.255221 1074625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:10:46.263862 1074625 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:10:46.263939 1074625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:10:46.271979 1074625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:10:46.280209 1074625 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:10:46.280270 1074625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:10:46.288528 1074625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:10:46.297130 1074625 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:10:46.297217 1074625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
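	
	The grep/rm loop above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint; otherwise it is removed so `kubeadm init` regenerates it. A hedged Go equivalent of that check:
	
	```go
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		for _, conf := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			path := "/etc/kubernetes/" + conf
			data, err := os.ReadFile(path)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing file or wrong endpoint: remove it (rm -f semantics),
				// matching the `sudo rm -f` calls in the log above.
				_ = os.Remove(path)
				fmt.Println("removed stale", path)
				continue
			}
			fmt.Println("kept", path)
		}
	}
	```
	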
	I1026 15:10:46.305311 1074625 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 15:10:46.342542 1074625 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 15:10:46.342630 1074625 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 15:10:46.365812 1074625 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 15:10:46.365895 1074625 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 15:10:46.365948 1074625 kubeadm.go:318] OS: Linux
	I1026 15:10:46.366013 1074625 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 15:10:46.366084 1074625 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 15:10:46.366156 1074625 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 15:10:46.366256 1074625 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 15:10:46.366327 1074625 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 15:10:46.366407 1074625 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 15:10:46.366487 1074625 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 15:10:46.366550 1074625 kubeadm.go:318] CGROUPS_IO: enabled
	I1026 15:10:46.434487 1074625 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 15:10:46.434684 1074625 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 15:10:46.434850 1074625 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 15:10:46.449417 1074625 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 15:10:46.451638 1074625 out.go:252]   - Generating certificates and keys ...
	I1026 15:10:46.451718 1074625 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:10:46.451799 1074625 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:10:46.544980 1074625 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:10:46.942896 1074625 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:10:45.361129 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:45.861002 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:46.361407 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:46.861087 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:47.361108 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:47.861287 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:48.361364 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:48.861614 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:49.360835 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:49.860964 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:47.293244 1074625 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:10:47.587210 1074625 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:10:47.722490 1074625 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:10:47.722658 1074625 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-475081] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1026 15:10:48.073995 1074625 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:10:48.074207 1074625 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-475081] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1026 15:10:48.513259 1074625 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:10:48.879824 1074625 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:10:49.408563 1074625 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:10:49.408631 1074625 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:10:49.740887 1074625 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:10:49.781069 1074625 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 15:10:50.006512 1074625 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:10:50.126628 1074625 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:10:50.678470 1074625 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:10:50.679149 1074625 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:10:50.684540 1074625 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 15:10:50.688380 1074625 out.go:252]   - Booting up control plane ...
	I1026 15:10:50.688510 1074625 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:10:50.688604 1074625 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:10:50.688687 1074625 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:10:50.705448 1074625 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:10:50.705644 1074625 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:10:50.714300 1074625 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:10:50.714551 1074625 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:10:50.714643 1074625 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:10:50.827921 1074625 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:10:50.828130 1074625 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 15:10:51.829638 1074625 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001782547s
	I1026 15:10:51.832732 1074625 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 15:10:51.832890 1074625 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1026 15:10:51.833025 1074625 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 15:10:51.833156 1074625 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 15:10:50.361214 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:50.861506 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:51.361237 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:51.860933 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:52.361411 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:52.861266 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:53.360749 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:53.861343 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:54.360799 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:54.861123 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
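	
	The repeated `kubectl get sa default` runs above (roughly every 500ms) poll until the default ServiceAccount exists, which is how the elevateKubeSystemPrivileges wait visible later in the log completes. A minimal sketch of that wait loop (the timeout value is illustrative):
	
	```go
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.28.0/kubectl"
		deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
		for time.Now().Before(deadline) {
			// Exit code 0 means the default ServiceAccount has been created
			// and kube-system is ready for RBAC bindings.
			err := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
			if err == nil {
				fmt.Println("default ServiceAccount exists; cluster is usable")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for default ServiceAccount")
	}
	```
	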
	I1026 15:10:52.994362 1074625 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.161540163s
	I1026 15:10:53.884480 1074625 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.051529219s
	I1026 15:10:55.334524 1074625 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501853922s
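	
	The control-plane-check lines above probe each component's health endpoint until it answers with 200. A hedged sketch of those probes; TLS verification is skipped here only because this is a sketch against minikube's self-signed certificates:
	
	```go
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// healthy reports whether the endpoint currently returns HTTP 200.
	func healthy(client *http.Client, url string) bool {
		resp, err := client.Get(url)
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK
	}
	
	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		// The three endpoints named in the [control-plane-check] lines above.
		checks := map[string]string{
			"kube-apiserver":          "https://192.168.103.2:8443/livez",
			"kube-controller-manager": "https://127.0.0.1:10257/healthz",
			"kube-scheduler":          "https://127.0.0.1:10259/livez",
		}
		for name, url := range checks {
			start := time.Now()
			for !healthy(client, url) {
				time.Sleep(250 * time.Millisecond)
			}
			fmt.Printf("[control-plane-check] %s is healthy after %s\n", name, time.Since(start))
		}
	}
	```
	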
	I1026 15:10:55.349125 1074625 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 15:10:55.360947 1074625 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 15:10:55.383433 1074625 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 15:10:55.383698 1074625 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-475081 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 15:10:55.393461 1074625 kubeadm.go:318] [bootstrap-token] Using token: nw95n1.djczsarbkw9vs3el
	I1026 15:10:54.064311 1030092 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.073214314s)
	W1026 15:10:54.064366 1030092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1026 15:10:55.362183 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:55.861389 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:56.360928 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:56.439354 1072816 kubeadm.go:1113] duration metric: took 11.670664157s to wait for elevateKubeSystemPrivileges
	I1026 15:10:56.439391 1072816 kubeadm.go:402] duration metric: took 23.189428634s to StartCluster
	I1026 15:10:56.439415 1072816 settings.go:142] acquiring lock: {Name:mkab79daecf1fab35293493e1e2484069a81f3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:56.439491 1072816 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:10:56.440806 1072816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:56.441086 1072816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:10:56.441089 1072816 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:10:56.441151 1072816 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:10:56.441307 1072816 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-330914"
	I1026 15:10:56.441336 1072816 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-330914"
	I1026 15:10:56.441337 1072816 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-330914"
	I1026 15:10:56.441356 1072816 config.go:182] Loaded profile config "old-k8s-version-330914": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 15:10:56.441362 1072816 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-330914"
	I1026 15:10:56.441373 1072816 host.go:66] Checking if "old-k8s-version-330914" exists ...
	I1026 15:10:56.441763 1072816 cli_runner.go:164] Run: docker container inspect old-k8s-version-330914 --format={{.State.Status}}
	I1026 15:10:56.442039 1072816 cli_runner.go:164] Run: docker container inspect old-k8s-version-330914 --format={{.State.Status}}
	I1026 15:10:56.442757 1072816 out.go:179] * Verifying Kubernetes components...
	I1026 15:10:56.444270 1072816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:10:56.467649 1072816 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-330914"
	I1026 15:10:56.467701 1072816 host.go:66] Checking if "old-k8s-version-330914" exists ...
	I1026 15:10:56.468466 1072816 cli_runner.go:164] Run: docker container inspect old-k8s-version-330914 --format={{.State.Status}}
	I1026 15:10:56.468484 1072816 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:10:55.394882 1074625 out.go:252]   - Configuring RBAC rules ...
	I1026 15:10:55.395049 1074625 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 15:10:55.399256 1074625 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 15:10:55.405568 1074625 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 15:10:55.408575 1074625 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 15:10:55.411658 1074625 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 15:10:55.415608 1074625 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 15:10:55.740841 1074625 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 15:10:56.156245 1074625 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 15:10:56.740762 1074625 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 15:10:56.741898 1074625 kubeadm.go:318] 
	I1026 15:10:56.741988 1074625 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 15:10:56.742003 1074625 kubeadm.go:318] 
	I1026 15:10:56.742237 1074625 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 15:10:56.742268 1074625 kubeadm.go:318] 
	I1026 15:10:56.742306 1074625 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 15:10:56.742382 1074625 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 15:10:56.742447 1074625 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 15:10:56.742483 1074625 kubeadm.go:318] 
	I1026 15:10:56.742567 1074625 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 15:10:56.742581 1074625 kubeadm.go:318] 
	I1026 15:10:56.742638 1074625 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 15:10:56.742649 1074625 kubeadm.go:318] 
	I1026 15:10:56.742717 1074625 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 15:10:56.742983 1074625 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 15:10:56.743113 1074625 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 15:10:56.743125 1074625 kubeadm.go:318] 
	I1026 15:10:56.743289 1074625 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 15:10:56.743395 1074625 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 15:10:56.743406 1074625 kubeadm.go:318] 
	I1026 15:10:56.743523 1074625 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token nw95n1.djczsarbkw9vs3el \
	I1026 15:10:56.743675 1074625 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:17405a11f9ced5253329d88582717a258ab19676719f7fb1d52a2fb8fc3ffa0b \
	I1026 15:10:56.743719 1074625 kubeadm.go:318] 	--control-plane 
	I1026 15:10:56.743729 1074625 kubeadm.go:318] 
	I1026 15:10:56.743850 1074625 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 15:10:56.743861 1074625 kubeadm.go:318] 
	I1026 15:10:56.743976 1074625 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token nw95n1.djczsarbkw9vs3el \
	I1026 15:10:56.744123 1074625 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:17405a11f9ced5253329d88582717a258ab19676719f7fb1d52a2fb8fc3ffa0b 
	I1026 15:10:56.746845 1074625 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1026 15:10:56.747008 1074625 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
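	The join commands above embed a --discovery-token-ca-cert-hash. That hash can be recomputed on the control plane from the cluster CA using the standard kubeadm recipe (a sketch, assuming the default kubeadm PKI path):

		openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex | sed 's/^.* //'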
	I1026 15:10:56.747040 1074625 cni.go:84] Creating CNI manager for ""
	I1026 15:10:56.747050 1074625 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:10:56.750026 1074625 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 15:10:56.751356 1074625 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 15:10:56.757248 1074625 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 15:10:56.757273 1074625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 15:10:56.774800 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
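	After the kindnet manifest is applied, the pod network only becomes functional once the CNI config and binaries land on the node. A hedged spot check (the daemonset name "kindnet" is an assumption, not taken from this log):

		ls /etc/cni/net.d/                             # CNI config dropped by the manifest
		kubectl -n kube-system get daemonset kindnet   # pods should reach Ready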
	I1026 15:10:56.469932 1072816 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:10:56.469955 1072816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:10:56.470019 1072816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:10:56.504723 1072816 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:10:56.504751 1072816 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:10:56.504829 1072816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:10:56.515311 1072816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/old-k8s-version-330914/id_rsa Username:docker}
	I1026 15:10:56.538782 1072816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/old-k8s-version-330914/id_rsa Username:docker}
	I1026 15:10:56.554662 1072816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 15:10:56.602563 1072816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:10:56.641263 1072816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:10:56.660140 1072816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:10:56.799357 1072816 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
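	The injection at 15:10:56.554662 splices a hosts stanza into the CoreDNS Corefile ahead of the forward block, so in-cluster lookups of host.minikube.internal resolve to the host gateway. Reconstructed from the sed expression, the inserted fragment is:

		hosts {
		   192.168.85.1 host.minikube.internal
		   fallthrough
		}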
	I1026 15:10:56.800805 1072816 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-330914" to be "Ready" ...
	I1026 15:10:57.119527 1072816 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1026 15:10:57.120858 1072816 addons.go:514] duration metric: took 679.703896ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 15:10:57.304771 1072816 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-330914" context rescaled to 1 replicas
	W1026 15:10:58.804053 1072816 node_ready.go:57] node "old-k8s-version-330914" has "Ready":"False" status (will retry)
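	The node_ready loop above polls until the Node's Ready condition flips to True. Outside the test harness, the same wait can be expressed in a single kubectl call (a sketch, using the node name and 6m timeout from this log):

		kubectl wait --for=condition=Ready node/old-k8s-version-330914 --timeout=6m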
	I1026 15:10:56.564521 1030092 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:10:57.097500 1074625 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:10:57.097594 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:57.097690 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-475081 minikube.k8s.io/updated_at=2025_10_26T15_10_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=no-preload-475081 minikube.k8s.io/primary=true
	I1026 15:10:57.113722 1074625 ops.go:34] apiserver oom_adj: -16
	I1026 15:10:57.197288 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:57.697812 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:58.197426 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:58.698241 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:59.197487 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:59.697418 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:11:00.197572 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:11:00.697972 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:11:01.197817 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:11:01.698041 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:11:01.775435 1074625 kubeadm.go:1113] duration metric: took 4.677917592s to wait for elevateKubeSystemPrivileges
	I1026 15:11:01.775471 1074625 kubeadm.go:402] duration metric: took 15.59349307s to StartCluster
	I1026 15:11:01.775495 1074625 settings.go:142] acquiring lock: {Name:mkab79daecf1fab35293493e1e2484069a81f3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:11:01.775575 1074625 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:11:01.776938 1074625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:11:01.777226 1074625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:11:01.777236 1074625 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:11:01.777303 1074625 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:11:01.777422 1074625 addons.go:69] Setting storage-provisioner=true in profile "no-preload-475081"
	I1026 15:11:01.777437 1074625 addons.go:69] Setting default-storageclass=true in profile "no-preload-475081"
	I1026 15:11:01.777458 1074625 config.go:182] Loaded profile config "no-preload-475081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:11:01.777465 1074625 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-475081"
	I1026 15:11:01.777443 1074625 addons.go:238] Setting addon storage-provisioner=true in "no-preload-475081"
	I1026 15:11:01.777528 1074625 host.go:66] Checking if "no-preload-475081" exists ...
	I1026 15:11:01.777838 1074625 cli_runner.go:164] Run: docker container inspect no-preload-475081 --format={{.State.Status}}
	I1026 15:11:01.778079 1074625 cli_runner.go:164] Run: docker container inspect no-preload-475081 --format={{.State.Status}}
	I1026 15:11:01.779866 1074625 out.go:179] * Verifying Kubernetes components...
	I1026 15:11:01.782268 1074625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:11:01.808631 1074625 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:11:01.809013 1074625 addons.go:238] Setting addon default-storageclass=true in "no-preload-475081"
	I1026 15:11:01.809066 1074625 host.go:66] Checking if "no-preload-475081" exists ...
	I1026 15:11:01.809679 1074625 cli_runner.go:164] Run: docker container inspect no-preload-475081 --format={{.State.Status}}
	I1026 15:11:01.810802 1074625 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:11:01.810831 1074625 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:11:01.810897 1074625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:11:01.842886 1074625 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:11:01.842913 1074625 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:11:01.842981 1074625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:11:01.844290 1074625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/no-preload-475081/id_rsa Username:docker}
	I1026 15:11:01.873435 1074625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/no-preload-475081/id_rsa Username:docker}
	I1026 15:11:01.901089 1074625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 15:11:01.977866 1074625 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:11:01.983106 1074625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:11:01.994377 1074625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:11:02.102811 1074625 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1026 15:11:02.103807 1074625 node_ready.go:35] waiting up to 6m0s for node "no-preload-475081" to be "Ready" ...
	I1026 15:11:02.327018 1074625 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
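	With both addons enabled, the default-storageclass addon is expected to leave a StorageClass annotated as the cluster default (in minikube that class is conventionally named "standard"; treat the name as an assumption here):

		kubectl get storageclass
		# look for "(default)" next to the class name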
	W1026 15:11:01.304149 1072816 node_ready.go:57] node "old-k8s-version-330914" has "Ready":"False" status (will retry)
	W1026 15:11:03.304444 1072816 node_ready.go:57] node "old-k8s-version-330914" has "Ready":"False" status (will retry)
	I1026 15:11:01.565457 1030092 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1026 15:11:01.565526 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:11:01.565592 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:11:01.595320 1030092 cri.go:89] found id: "0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703"
	I1026 15:11:01.595349 1030092 cri.go:89] found id: "a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:11:01.595355 1030092 cri.go:89] found id: ""
	I1026 15:11:01.595366 1030092 logs.go:282] 2 containers: [0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703 a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c]
	I1026 15:11:01.595433 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:01.599897 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:01.604236 1030092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:11:01.604320 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:11:01.638070 1030092 cri.go:89] found id: ""
	I1026 15:11:01.638100 1030092 logs.go:282] 0 containers: []
	W1026 15:11:01.638113 1030092 logs.go:284] No container was found matching "etcd"
	I1026 15:11:01.638121 1030092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:11:01.638265 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:11:01.671222 1030092 cri.go:89] found id: ""
	I1026 15:11:01.671258 1030092 logs.go:282] 0 containers: []
	W1026 15:11:01.671269 1030092 logs.go:284] No container was found matching "coredns"
	I1026 15:11:01.671288 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:11:01.671367 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:11:01.702120 1030092 cri.go:89] found id: "933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:11:01.702154 1030092 cri.go:89] found id: ""
	I1026 15:11:01.702180 1030092 logs.go:282] 1 containers: [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e]
	I1026 15:11:01.702245 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:01.707228 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:11:01.707304 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:11:01.740378 1030092 cri.go:89] found id: ""
	I1026 15:11:01.740410 1030092 logs.go:282] 0 containers: []
	W1026 15:11:01.740422 1030092 logs.go:284] No container was found matching "kube-proxy"
	I1026 15:11:01.740430 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:11:01.740490 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:11:01.773069 1030092 cri.go:89] found id: "51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b"
	I1026 15:11:01.773102 1030092 cri.go:89] found id: "fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:11:01.773107 1030092 cri.go:89] found id: ""
	I1026 15:11:01.773118 1030092 logs.go:282] 2 containers: [51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1]
	I1026 15:11:01.773198 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:01.777963 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:01.783689 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:11:01.783782 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:11:01.831997 1030092 cri.go:89] found id: ""
	I1026 15:11:01.832030 1030092 logs.go:282] 0 containers: []
	W1026 15:11:01.832042 1030092 logs.go:284] No container was found matching "kindnet"
	I1026 15:11:01.832050 1030092 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 15:11:01.832121 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 15:11:01.882822 1030092 cri.go:89] found id: ""
	I1026 15:11:01.882853 1030092 logs.go:282] 0 containers: []
	W1026 15:11:01.882923 1030092 logs.go:284] No container was found matching "storage-provisioner"
	I1026 15:11:01.882949 1030092 logs.go:123] Gathering logs for kubelet ...
	I1026 15:11:01.882965 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:11:02.004970 1030092 logs.go:123] Gathering logs for kube-apiserver [0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703] ...
	I1026 15:11:02.005012 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703"
	I1026 15:11:02.056875 1030092 logs.go:123] Gathering logs for kube-controller-manager [51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b] ...
	I1026 15:11:02.056922 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b"
	I1026 15:11:02.091785 1030092 logs.go:123] Gathering logs for kube-controller-manager [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1] ...
	I1026 15:11:02.091833 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:11:02.130241 1030092 logs.go:123] Gathering logs for dmesg ...
	I1026 15:11:02.130278 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:11:02.152749 1030092 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:11:02.152786 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 15:11:02.328238 1074625 addons.go:514] duration metric: took 550.948627ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 15:11:02.607897 1074625 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-475081" context rescaled to 1 replicas
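	Rescaling coredns to a single replica is minikube's default for single-node clusters; done by hand it would be the usual scale call (an equivalent sketch, not the harness's actual code path):

		kubectl -n kube-system scale deployment/coredns --replicas=1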
	W1026 15:11:04.107787 1074625 node_ready.go:57] node "no-preload-475081" has "Ready":"False" status (will retry)
	W1026 15:11:06.606743 1074625 node_ready.go:57] node "no-preload-475081" has "Ready":"False" status (will retry)
	W1026 15:11:05.804721 1072816 node_ready.go:57] node "old-k8s-version-330914" has "Ready":"False" status (will retry)
	W1026 15:11:08.304346 1072816 node_ready.go:57] node "old-k8s-version-330914" has "Ready":"False" status (will retry)
	I1026 15:11:09.803688 1072816 node_ready.go:49] node "old-k8s-version-330914" is "Ready"
	I1026 15:11:09.803716 1072816 node_ready.go:38] duration metric: took 13.00287656s for node "old-k8s-version-330914" to be "Ready" ...
	I1026 15:11:09.803732 1072816 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:11:09.803798 1072816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:11:09.816563 1072816 api_server.go:72] duration metric: took 13.375438769s to wait for apiserver process to appear ...
	I1026 15:11:09.816590 1072816 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:11:09.816611 1072816 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 15:11:09.820927 1072816 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1026 15:11:09.822208 1072816 api_server.go:141] control plane version: v1.28.0
	I1026 15:11:09.822238 1072816 api_server.go:131] duration metric: took 5.639605ms to wait for apiserver health ...
	I1026 15:11:09.822250 1072816 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:11:09.828117 1072816 system_pods.go:59] 8 kube-system pods found
	I1026 15:11:09.828153 1072816 system_pods.go:61] "coredns-5dd5756b68-hzjqn" [21211baf-4153-41c8-aacc-6d313dcdef82] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:11:09.828159 1072816 system_pods.go:61] "etcd-old-k8s-version-330914" [cb37b501-d930-4a0e-8e96-9aa97fbfef91] Running
	I1026 15:11:09.828176 1072816 system_pods.go:61] "kindnet-b8hhx" [522edddb-fb4b-4e11-a49f-48843f236bab] Running
	I1026 15:11:09.828180 1072816 system_pods.go:61] "kube-apiserver-old-k8s-version-330914" [d1f54bcd-dcc1-4654-90ab-765846ebeaf7] Running
	I1026 15:11:09.828185 1072816 system_pods.go:61] "kube-controller-manager-old-k8s-version-330914" [73822523-0f7b-41ad-a7ed-5cf10ec4480a] Running
	I1026 15:11:09.828188 1072816 system_pods.go:61] "kube-proxy-829lp" [b212cf79-e2d5-49ef-9e66-80ffcd18774f] Running
	I1026 15:11:09.828192 1072816 system_pods.go:61] "kube-scheduler-old-k8s-version-330914" [3b01ee94-ea99-49d9-9a73-e2cba374721f] Running
	I1026 15:11:09.828197 1072816 system_pods.go:61] "storage-provisioner" [d505b114-6834-4c0b-858b-a785482ca1ec] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:11:09.828204 1072816 system_pods.go:74] duration metric: took 5.946507ms to wait for pod list to return data ...
	I1026 15:11:09.828215 1072816 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:11:09.830701 1072816 default_sa.go:45] found service account: "default"
	I1026 15:11:09.830739 1072816 default_sa.go:55] duration metric: took 2.516755ms for default service account to be created ...
	I1026 15:11:09.830751 1072816 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:11:09.833947 1072816 system_pods.go:86] 8 kube-system pods found
	I1026 15:11:09.833980 1072816 system_pods.go:89] "coredns-5dd5756b68-hzjqn" [21211baf-4153-41c8-aacc-6d313dcdef82] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:11:09.833986 1072816 system_pods.go:89] "etcd-old-k8s-version-330914" [cb37b501-d930-4a0e-8e96-9aa97fbfef91] Running
	I1026 15:11:09.833993 1072816 system_pods.go:89] "kindnet-b8hhx" [522edddb-fb4b-4e11-a49f-48843f236bab] Running
	I1026 15:11:09.834001 1072816 system_pods.go:89] "kube-apiserver-old-k8s-version-330914" [d1f54bcd-dcc1-4654-90ab-765846ebeaf7] Running
	I1026 15:11:09.834008 1072816 system_pods.go:89] "kube-controller-manager-old-k8s-version-330914" [73822523-0f7b-41ad-a7ed-5cf10ec4480a] Running
	I1026 15:11:09.834012 1072816 system_pods.go:89] "kube-proxy-829lp" [b212cf79-e2d5-49ef-9e66-80ffcd18774f] Running
	I1026 15:11:09.834015 1072816 system_pods.go:89] "kube-scheduler-old-k8s-version-330914" [3b01ee94-ea99-49d9-9a73-e2cba374721f] Running
	I1026 15:11:09.834020 1072816 system_pods.go:89] "storage-provisioner" [d505b114-6834-4c0b-858b-a785482ca1ec] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:11:09.834048 1072816 retry.go:31] will retry after 262.214269ms: missing components: kube-dns
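	The retry fires because the pod list is still missing a running kube-dns component: coredns is Pending. The same check by hand, using the k8s-app=kube-dns label the harness itself waits on later in this log:

		kubectl -n kube-system get pods -l k8s-app=kube-dns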
	I1026 15:11:10.100403 1072816 system_pods.go:86] 8 kube-system pods found
	I1026 15:11:10.100434 1072816 system_pods.go:89] "coredns-5dd5756b68-hzjqn" [21211baf-4153-41c8-aacc-6d313dcdef82] Running
	I1026 15:11:10.100443 1072816 system_pods.go:89] "etcd-old-k8s-version-330914" [cb37b501-d930-4a0e-8e96-9aa97fbfef91] Running
	I1026 15:11:10.100448 1072816 system_pods.go:89] "kindnet-b8hhx" [522edddb-fb4b-4e11-a49f-48843f236bab] Running
	I1026 15:11:10.100453 1072816 system_pods.go:89] "kube-apiserver-old-k8s-version-330914" [d1f54bcd-dcc1-4654-90ab-765846ebeaf7] Running
	I1026 15:11:10.100461 1072816 system_pods.go:89] "kube-controller-manager-old-k8s-version-330914" [73822523-0f7b-41ad-a7ed-5cf10ec4480a] Running
	I1026 15:11:10.100465 1072816 system_pods.go:89] "kube-proxy-829lp" [b212cf79-e2d5-49ef-9e66-80ffcd18774f] Running
	I1026 15:11:10.100470 1072816 system_pods.go:89] "kube-scheduler-old-k8s-version-330914" [3b01ee94-ea99-49d9-9a73-e2cba374721f] Running
	I1026 15:11:10.100474 1072816 system_pods.go:89] "storage-provisioner" [d505b114-6834-4c0b-858b-a785482ca1ec] Running
	I1026 15:11:10.100485 1072816 system_pods.go:126] duration metric: took 269.725842ms to wait for k8s-apps to be running ...
	I1026 15:11:10.100500 1072816 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:11:10.100551 1072816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:11:10.114894 1072816 system_svc.go:56] duration metric: took 14.384166ms WaitForService to wait for kubelet
	I1026 15:11:10.114921 1072816 kubeadm.go:586] duration metric: took 13.6738053s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:11:10.114939 1072816 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:11:10.117463 1072816 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 15:11:10.117488 1072816 node_conditions.go:123] node cpu capacity is 8
	I1026 15:11:10.117503 1072816 node_conditions.go:105] duration metric: took 2.559987ms to run NodePressure ...
	I1026 15:11:10.117516 1072816 start.go:241] waiting for startup goroutines ...
	I1026 15:11:10.117523 1072816 start.go:246] waiting for cluster config update ...
	I1026 15:11:10.117533 1072816 start.go:255] writing updated cluster config ...
	I1026 15:11:10.117783 1072816 ssh_runner.go:195] Run: rm -f paused
	I1026 15:11:10.121775 1072816 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:11:10.125920 1072816 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-hzjqn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:10.130518 1072816 pod_ready.go:94] pod "coredns-5dd5756b68-hzjqn" is "Ready"
	I1026 15:11:10.130544 1072816 pod_ready.go:86] duration metric: took 4.603177ms for pod "coredns-5dd5756b68-hzjqn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:10.133391 1072816 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:10.137337 1072816 pod_ready.go:94] pod "etcd-old-k8s-version-330914" is "Ready"
	I1026 15:11:10.137356 1072816 pod_ready.go:86] duration metric: took 3.942349ms for pod "etcd-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:10.140016 1072816 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:10.144535 1072816 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-330914" is "Ready"
	I1026 15:11:10.144557 1072816 pod_ready.go:86] duration metric: took 4.519342ms for pod "kube-apiserver-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:10.147293 1072816 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:05.612330 1030092 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (3.459518512s)
	W1026 15:11:05.612379 1030092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:60576->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:60576->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1026 15:11:05.612392 1030092 logs.go:123] Gathering logs for kube-apiserver [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c] ...
	I1026 15:11:05.612409 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	W1026 15:11:05.640274 1030092 logs.go:130] failed kube-apiserver [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c": Process exited with status 1
	stdout:
	
	stderr:
	E1026 15:11:05.637593    5077 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c\": container with ID starting with a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c not found: ID does not exist" containerID="a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	time="2025-10-26T15:11:05Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c\": container with ID starting with a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c not found: ID does not exist"
	 output: 
	** stderr ** 
	E1026 15:11:05.637593    5077 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c\": container with ID starting with a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c not found: ID does not exist" containerID="a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	time="2025-10-26T15:11:05Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c\": container with ID starting with a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c not found: ID does not exist"
	
	** /stderr **
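	The NotFound failure is a race: the container ID came from an earlier crictl ps -a, and CRI-O pruned the exited container before crictl logs ran. A hedged guard against that window, reusing the ID from the log:

		# Hypothetical guard: fetch logs only if the container still exists
		id=a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c
		sudo crictl inspect "$id" >/dev/null 2>&1 && sudo crictl logs --tail 400 "$id"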
	I1026 15:11:05.640299 1030092 logs.go:123] Gathering logs for kube-scheduler [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e] ...
	I1026 15:11:05.640313 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:11:05.695066 1030092 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:11:05.695104 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:11:05.754458 1030092 logs.go:123] Gathering logs for container status ...
	I1026 15:11:05.754496 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 15:11:08.287645 1030092 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:11:08.288120 1030092 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1026 15:11:08.288229 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:11:08.288297 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:11:08.317543 1030092 cri.go:89] found id: "0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703"
	I1026 15:11:08.317572 1030092 cri.go:89] found id: ""
	I1026 15:11:08.317581 1030092 logs.go:282] 1 containers: [0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703]
	I1026 15:11:08.317644 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:08.321906 1030092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:11:08.321980 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:11:08.349672 1030092 cri.go:89] found id: ""
	I1026 15:11:08.349701 1030092 logs.go:282] 0 containers: []
	W1026 15:11:08.349712 1030092 logs.go:284] No container was found matching "etcd"
	I1026 15:11:08.349720 1030092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:11:08.349780 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:11:08.378612 1030092 cri.go:89] found id: ""
	I1026 15:11:08.378636 1030092 logs.go:282] 0 containers: []
	W1026 15:11:08.378643 1030092 logs.go:284] No container was found matching "coredns"
	I1026 15:11:08.378648 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:11:08.378695 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:11:08.407328 1030092 cri.go:89] found id: "933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:11:08.407354 1030092 cri.go:89] found id: ""
	I1026 15:11:08.407363 1030092 logs.go:282] 1 containers: [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e]
	I1026 15:11:08.407417 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:08.411875 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:11:08.411950 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:11:08.439929 1030092 cri.go:89] found id: ""
	I1026 15:11:08.439958 1030092 logs.go:282] 0 containers: []
	W1026 15:11:08.439968 1030092 logs.go:284] No container was found matching "kube-proxy"
	I1026 15:11:08.439975 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:11:08.440045 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:11:08.469571 1030092 cri.go:89] found id: "51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b"
	I1026 15:11:08.469592 1030092 cri.go:89] found id: "fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:11:08.469595 1030092 cri.go:89] found id: ""
	I1026 15:11:08.469604 1030092 logs.go:282] 2 containers: [51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1]
	I1026 15:11:08.469657 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:08.474409 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:08.478499 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:11:08.478575 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:11:08.506729 1030092 cri.go:89] found id: ""
	I1026 15:11:08.506756 1030092 logs.go:282] 0 containers: []
	W1026 15:11:08.506764 1030092 logs.go:284] No container was found matching "kindnet"
	I1026 15:11:08.506771 1030092 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 15:11:08.506834 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 15:11:08.535880 1030092 cri.go:89] found id: ""
	I1026 15:11:08.535904 1030092 logs.go:282] 0 containers: []
	W1026 15:11:08.535919 1030092 logs.go:284] No container was found matching "storage-provisioner"
	I1026 15:11:08.535934 1030092 logs.go:123] Gathering logs for dmesg ...
	I1026 15:11:08.535946 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:11:08.552721 1030092 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:11:08.552751 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:11:08.612471 1030092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 15:11:08.612495 1030092 logs.go:123] Gathering logs for kube-apiserver [0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703] ...
	I1026 15:11:08.612512 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703"
	I1026 15:11:08.646548 1030092 logs.go:123] Gathering logs for kube-controller-manager [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1] ...
	I1026 15:11:08.646586 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:11:08.675694 1030092 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:11:08.675727 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:11:08.733251 1030092 logs.go:123] Gathering logs for kubelet ...
	I1026 15:11:08.733284 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:11:08.830310 1030092 logs.go:123] Gathering logs for kube-scheduler [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e] ...
	I1026 15:11:08.830352 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:11:08.883994 1030092 logs.go:123] Gathering logs for kube-controller-manager [51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b] ...
	I1026 15:11:08.884033 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b"
	I1026 15:11:08.917368 1030092 logs.go:123] Gathering logs for container status ...
	I1026 15:11:08.917408 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 15:11:10.526539 1072816 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-330914" is "Ready"
	I1026 15:11:10.526570 1072816 pod_ready.go:86] duration metric: took 379.257211ms for pod "kube-controller-manager-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:10.726283 1072816 pod_ready.go:83] waiting for pod "kube-proxy-829lp" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:11.126673 1072816 pod_ready.go:94] pod "kube-proxy-829lp" is "Ready"
	I1026 15:11:11.126698 1072816 pod_ready.go:86] duration metric: took 400.390007ms for pod "kube-proxy-829lp" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:11.326427 1072816 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:11.725615 1072816 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-330914" is "Ready"
	I1026 15:11:11.725651 1072816 pod_ready.go:86] duration metric: took 399.197469ms for pod "kube-scheduler-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:11.725668 1072816 pod_ready.go:40] duration metric: took 1.603861334s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:11:11.774782 1072816 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1026 15:11:11.776554 1072816 out.go:203] 
	W1026 15:11:11.777833 1072816 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1026 15:11:11.779133 1072816 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1026 15:11:11.780349 1072816 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-330914" cluster and "default" namespace by default
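	The warning above flags a six-minor-version skew (client 1.34.1 against a 1.28.0 control plane), well beyond the one-minor skew kubectl supports. The log's own suggestion sidesteps this by running a version-matched kubectl:

		minikube kubectl -- get pods -A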
	W1026 15:11:08.607057 1074625 node_ready.go:57] node "no-preload-475081" has "Ready":"False" status (will retry)
	W1026 15:11:11.107346 1074625 node_ready.go:57] node "no-preload-475081" has "Ready":"False" status (will retry)
	I1026 15:11:11.450586 1030092 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:11:11.451209 1030092 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1026 15:11:11.451282 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:11:11.451347 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:11:11.480980 1030092 cri.go:89] found id: "0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703"
	I1026 15:11:11.481008 1030092 cri.go:89] found id: ""
	I1026 15:11:11.481018 1030092 logs.go:282] 1 containers: [0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703]
	I1026 15:11:11.481081 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:11.485354 1030092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:11:11.485429 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:11:11.513983 1030092 cri.go:89] found id: ""
	I1026 15:11:11.514011 1030092 logs.go:282] 0 containers: []
	W1026 15:11:11.514024 1030092 logs.go:284] No container was found matching "etcd"
	I1026 15:11:11.514031 1030092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:11:11.514094 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:11:11.546131 1030092 cri.go:89] found id: ""
	I1026 15:11:11.546157 1030092 logs.go:282] 0 containers: []
	W1026 15:11:11.546182 1030092 logs.go:284] No container was found matching "coredns"
	I1026 15:11:11.546190 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:11:11.546259 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:11:11.575322 1030092 cri.go:89] found id: "933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:11:11.575346 1030092 cri.go:89] found id: ""
	I1026 15:11:11.575356 1030092 logs.go:282] 1 containers: [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e]
	I1026 15:11:11.575425 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:11.579711 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:11:11.579801 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:11:11.609378 1030092 cri.go:89] found id: ""
	I1026 15:11:11.609405 1030092 logs.go:282] 0 containers: []
	W1026 15:11:11.609415 1030092 logs.go:284] No container was found matching "kube-proxy"
	I1026 15:11:11.609423 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:11:11.609485 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:11:11.637703 1030092 cri.go:89] found id: "51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b"
	I1026 15:11:11.637729 1030092 cri.go:89] found id: "fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:11:11.637734 1030092 cri.go:89] found id: ""
	I1026 15:11:11.637745 1030092 logs.go:282] 2 containers: [51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1]
	I1026 15:11:11.637818 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:11.642074 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:11.646190 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:11:11.646262 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:11:11.676915 1030092 cri.go:89] found id: ""
	I1026 15:11:11.676943 1030092 logs.go:282] 0 containers: []
	W1026 15:11:11.676953 1030092 logs.go:284] No container was found matching "kindnet"
	I1026 15:11:11.676959 1030092 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 15:11:11.677007 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 15:11:11.707840 1030092 cri.go:89] found id: ""
	I1026 15:11:11.707869 1030092 logs.go:282] 0 containers: []
	W1026 15:11:11.707878 1030092 logs.go:284] No container was found matching "storage-provisioner"
	I1026 15:11:11.707893 1030092 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:11:11.707904 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:11:11.767156 1030092 logs.go:123] Gathering logs for container status ...
	I1026 15:11:11.767201 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 15:11:11.805157 1030092 logs.go:123] Gathering logs for kube-controller-manager [51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b] ...
	I1026 15:11:11.805204 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b"
	I1026 15:11:11.834187 1030092 logs.go:123] Gathering logs for kube-controller-manager [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1] ...
	I1026 15:11:11.834227 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:11:11.865256 1030092 logs.go:123] Gathering logs for kubelet ...
	I1026 15:11:11.865286 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:11:11.973050 1030092 logs.go:123] Gathering logs for dmesg ...
	I1026 15:11:11.973090 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:11:11.990773 1030092 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:11:11.990817 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:11:12.050222 1030092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 15:11:12.050252 1030092 logs.go:123] Gathering logs for kube-apiserver [0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703] ...
	I1026 15:11:12.050271 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703"
	I1026 15:11:12.085010 1030092 logs.go:123] Gathering logs for kube-scheduler [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e] ...
	I1026 15:11:12.085054 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:11:14.644225 1030092 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:11:14.644669 1030092 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1026 15:11:14.644729 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:11:14.644794 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:11:14.676017 1030092 cri.go:89] found id: "0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703"
	I1026 15:11:14.676042 1030092 cri.go:89] found id: ""
	I1026 15:11:14.676053 1030092 logs.go:282] 1 containers: [0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703]
	I1026 15:11:14.676114 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:14.680608 1030092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:11:14.680688 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:11:14.711808 1030092 cri.go:89] found id: ""
	I1026 15:11:14.711845 1030092 logs.go:282] 0 containers: []
	W1026 15:11:14.711856 1030092 logs.go:284] No container was found matching "etcd"
	I1026 15:11:14.711863 1030092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:11:14.711931 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:11:14.741627 1030092 cri.go:89] found id: ""
	I1026 15:11:14.741657 1030092 logs.go:282] 0 containers: []
	W1026 15:11:14.741667 1030092 logs.go:284] No container was found matching "coredns"
	I1026 15:11:14.741675 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:11:14.741724 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:11:14.769937 1030092 cri.go:89] found id: "933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:11:14.769964 1030092 cri.go:89] found id: ""
	I1026 15:11:14.769976 1030092 logs.go:282] 1 containers: [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e]
	I1026 15:11:14.770028 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:14.774432 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:11:14.774500 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:11:14.803136 1030092 cri.go:89] found id: ""
	I1026 15:11:14.803206 1030092 logs.go:282] 0 containers: []
	W1026 15:11:14.803221 1030092 logs.go:284] No container was found matching "kube-proxy"
	I1026 15:11:14.803234 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:11:14.803297 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:11:14.832287 1030092 cri.go:89] found id: "51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b"
	I1026 15:11:14.832314 1030092 cri.go:89] found id: ""
	I1026 15:11:14.832325 1030092 logs.go:282] 1 containers: [51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b]
	I1026 15:11:14.832386 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:14.836724 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:11:14.836797 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:11:14.865490 1030092 cri.go:89] found id: ""
	I1026 15:11:14.865535 1030092 logs.go:282] 0 containers: []
	W1026 15:11:14.865547 1030092 logs.go:284] No container was found matching "kindnet"
	I1026 15:11:14.865555 1030092 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 15:11:14.865623 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 15:11:14.894500 1030092 cri.go:89] found id: ""
	I1026 15:11:14.894526 1030092 logs.go:282] 0 containers: []
	W1026 15:11:14.894534 1030092 logs.go:284] No container was found matching "storage-provisioner"
	I1026 15:11:14.894544 1030092 logs.go:123] Gathering logs for kube-scheduler [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e] ...
	I1026 15:11:14.894557 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:11:14.950065 1030092 logs.go:123] Gathering logs for kube-controller-manager [51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b] ...
	I1026 15:11:14.950107 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b"
	I1026 15:11:14.980621 1030092 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:11:14.980655 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:11:15.039070 1030092 logs.go:123] Gathering logs for container status ...
	I1026 15:11:15.039110 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 15:11:15.075882 1030092 logs.go:123] Gathering logs for kubelet ...
	I1026 15:11:15.075929 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:11:15.171642 1030092 logs.go:123] Gathering logs for dmesg ...
	I1026 15:11:15.171678 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:11:15.188099 1030092 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:11:15.188131 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:11:15.246139 1030092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 15:11:15.246178 1030092 logs.go:123] Gathering logs for kube-apiserver [0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703] ...
	I1026 15:11:15.246195 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703"
	W1026 15:11:13.607101 1074625 node_ready.go:57] node "no-preload-475081" has "Ready":"False" status (will retry)
	I1026 15:11:14.607464 1074625 node_ready.go:49] node "no-preload-475081" is "Ready"
	I1026 15:11:14.607495 1074625 node_ready.go:38] duration metric: took 12.503660845s for node "no-preload-475081" to be "Ready" ...
	I1026 15:11:14.607512 1074625 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:11:14.607596 1074625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:11:14.625506 1074625 api_server.go:72] duration metric: took 12.848228717s to wait for apiserver process to appear ...
	I1026 15:11:14.625538 1074625 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:11:14.625561 1074625 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1026 15:11:14.630694 1074625 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1026 15:11:14.631772 1074625 api_server.go:141] control plane version: v1.34.1
	I1026 15:11:14.631802 1074625 api_server.go:131] duration metric: took 6.25545ms to wait for apiserver health ...
	I1026 15:11:14.631814 1074625 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:11:14.637467 1074625 system_pods.go:59] 8 kube-system pods found
	I1026 15:11:14.637510 1074625 system_pods.go:61] "coredns-66bc5c9577-knr22" [4ba1a7ff-bfea-43bf-b65e-2b1309709ac4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:11:14.637525 1074625 system_pods.go:61] "etcd-no-preload-475081" [517b1527-7a3f-4127-8911-3d16611e2468] Running
	I1026 15:11:14.637533 1074625 system_pods.go:61] "kindnet-7cnvx" [e36c0ce9-7f97-4d93-a199-92e1d130eb0b] Running
	I1026 15:11:14.637539 1074625 system_pods.go:61] "kube-apiserver-no-preload-475081" [ee5497a8-ed40-496e-b36e-370bb14c3fad] Running
	I1026 15:11:14.637545 1074625 system_pods.go:61] "kube-controller-manager-no-preload-475081" [9bae4029-8dac-4406-83b1-6318c3ea749c] Running
	I1026 15:11:14.637550 1074625 system_pods.go:61] "kube-proxy-smtlg" [5b84f479-f0f8-4260-bd71-ce14b36bae0d] Running
	I1026 15:11:14.637568 1074625 system_pods.go:61] "kube-scheduler-no-preload-475081" [db077d81-a03f-4886-b004-749606cfcdca] Running
	I1026 15:11:14.637575 1074625 system_pods.go:61] "storage-provisioner" [15518fa4-cf2c-44fe-8b16-e222dcbae51f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:11:14.637584 1074625 system_pods.go:74] duration metric: took 5.762278ms to wait for pod list to return data ...
	I1026 15:11:14.637596 1074625 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:11:14.641112 1074625 default_sa.go:45] found service account: "default"
	I1026 15:11:14.641144 1074625 default_sa.go:55] duration metric: took 3.540551ms for default service account to be created ...
	I1026 15:11:14.641155 1074625 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:11:14.736917 1074625 system_pods.go:86] 8 kube-system pods found
	I1026 15:11:14.736956 1074625 system_pods.go:89] "coredns-66bc5c9577-knr22" [4ba1a7ff-bfea-43bf-b65e-2b1309709ac4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:11:14.736965 1074625 system_pods.go:89] "etcd-no-preload-475081" [517b1527-7a3f-4127-8911-3d16611e2468] Running
	I1026 15:11:14.736974 1074625 system_pods.go:89] "kindnet-7cnvx" [e36c0ce9-7f97-4d93-a199-92e1d130eb0b] Running
	I1026 15:11:14.736980 1074625 system_pods.go:89] "kube-apiserver-no-preload-475081" [ee5497a8-ed40-496e-b36e-370bb14c3fad] Running
	I1026 15:11:14.736986 1074625 system_pods.go:89] "kube-controller-manager-no-preload-475081" [9bae4029-8dac-4406-83b1-6318c3ea749c] Running
	I1026 15:11:14.737004 1074625 system_pods.go:89] "kube-proxy-smtlg" [5b84f479-f0f8-4260-bd71-ce14b36bae0d] Running
	I1026 15:11:14.737013 1074625 system_pods.go:89] "kube-scheduler-no-preload-475081" [db077d81-a03f-4886-b004-749606cfcdca] Running
	I1026 15:11:14.737020 1074625 system_pods.go:89] "storage-provisioner" [15518fa4-cf2c-44fe-8b16-e222dcbae51f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:11:14.737056 1074625 retry.go:31] will retry after 219.983144ms: missing components: kube-dns
	I1026 15:11:14.961636 1074625 system_pods.go:86] 8 kube-system pods found
	I1026 15:11:14.961668 1074625 system_pods.go:89] "coredns-66bc5c9577-knr22" [4ba1a7ff-bfea-43bf-b65e-2b1309709ac4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:11:14.961675 1074625 system_pods.go:89] "etcd-no-preload-475081" [517b1527-7a3f-4127-8911-3d16611e2468] Running
	I1026 15:11:14.961683 1074625 system_pods.go:89] "kindnet-7cnvx" [e36c0ce9-7f97-4d93-a199-92e1d130eb0b] Running
	I1026 15:11:14.961696 1074625 system_pods.go:89] "kube-apiserver-no-preload-475081" [ee5497a8-ed40-496e-b36e-370bb14c3fad] Running
	I1026 15:11:14.961700 1074625 system_pods.go:89] "kube-controller-manager-no-preload-475081" [9bae4029-8dac-4406-83b1-6318c3ea749c] Running
	I1026 15:11:14.961703 1074625 system_pods.go:89] "kube-proxy-smtlg" [5b84f479-f0f8-4260-bd71-ce14b36bae0d] Running
	I1026 15:11:14.961706 1074625 system_pods.go:89] "kube-scheduler-no-preload-475081" [db077d81-a03f-4886-b004-749606cfcdca] Running
	I1026 15:11:14.961710 1074625 system_pods.go:89] "storage-provisioner" [15518fa4-cf2c-44fe-8b16-e222dcbae51f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:11:14.961727 1074625 retry.go:31] will retry after 370.983761ms: missing components: kube-dns
	I1026 15:11:15.337320 1074625 system_pods.go:86] 8 kube-system pods found
	I1026 15:11:15.337350 1074625 system_pods.go:89] "coredns-66bc5c9577-knr22" [4ba1a7ff-bfea-43bf-b65e-2b1309709ac4] Running
	I1026 15:11:15.337356 1074625 system_pods.go:89] "etcd-no-preload-475081" [517b1527-7a3f-4127-8911-3d16611e2468] Running
	I1026 15:11:15.337363 1074625 system_pods.go:89] "kindnet-7cnvx" [e36c0ce9-7f97-4d93-a199-92e1d130eb0b] Running
	I1026 15:11:15.337367 1074625 system_pods.go:89] "kube-apiserver-no-preload-475081" [ee5497a8-ed40-496e-b36e-370bb14c3fad] Running
	I1026 15:11:15.337371 1074625 system_pods.go:89] "kube-controller-manager-no-preload-475081" [9bae4029-8dac-4406-83b1-6318c3ea749c] Running
	I1026 15:11:15.337374 1074625 system_pods.go:89] "kube-proxy-smtlg" [5b84f479-f0f8-4260-bd71-ce14b36bae0d] Running
	I1026 15:11:15.337377 1074625 system_pods.go:89] "kube-scheduler-no-preload-475081" [db077d81-a03f-4886-b004-749606cfcdca] Running
	I1026 15:11:15.337380 1074625 system_pods.go:89] "storage-provisioner" [15518fa4-cf2c-44fe-8b16-e222dcbae51f] Running
	I1026 15:11:15.337388 1074625 system_pods.go:126] duration metric: took 696.20329ms to wait for k8s-apps to be running ...
	I1026 15:11:15.337395 1074625 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:11:15.337453 1074625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:11:15.351058 1074625 system_svc.go:56] duration metric: took 13.652446ms WaitForService to wait for kubelet
	I1026 15:11:15.351086 1074625 kubeadm.go:586] duration metric: took 13.573820317s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:11:15.351104 1074625 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:11:15.353841 1074625 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 15:11:15.353865 1074625 node_conditions.go:123] node cpu capacity is 8
	I1026 15:11:15.353889 1074625 node_conditions.go:105] duration metric: took 2.780465ms to run NodePressure ...
	I1026 15:11:15.353901 1074625 start.go:241] waiting for startup goroutines ...
	I1026 15:11:15.353910 1074625 start.go:246] waiting for cluster config update ...
	I1026 15:11:15.353922 1074625 start.go:255] writing updated cluster config ...
	I1026 15:11:15.354188 1074625 ssh_runner.go:195] Run: rm -f paused
	I1026 15:11:15.358267 1074625 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:11:15.361450 1074625 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-knr22" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:15.365648 1074625 pod_ready.go:94] pod "coredns-66bc5c9577-knr22" is "Ready"
	I1026 15:11:15.365671 1074625 pod_ready.go:86] duration metric: took 4.19882ms for pod "coredns-66bc5c9577-knr22" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:15.367814 1074625 pod_ready.go:83] waiting for pod "etcd-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:15.371397 1074625 pod_ready.go:94] pod "etcd-no-preload-475081" is "Ready"
	I1026 15:11:15.371416 1074625 pod_ready.go:86] duration metric: took 3.581783ms for pod "etcd-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:15.373200 1074625 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:15.376591 1074625 pod_ready.go:94] pod "kube-apiserver-no-preload-475081" is "Ready"
	I1026 15:11:15.376613 1074625 pod_ready.go:86] duration metric: took 3.391538ms for pod "kube-apiserver-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:15.378391 1074625 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:15.762605 1074625 pod_ready.go:94] pod "kube-controller-manager-no-preload-475081" is "Ready"
	I1026 15:11:15.762631 1074625 pod_ready.go:86] duration metric: took 384.221212ms for pod "kube-controller-manager-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:15.962635 1074625 pod_ready.go:83] waiting for pod "kube-proxy-smtlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:16.362127 1074625 pod_ready.go:94] pod "kube-proxy-smtlg" is "Ready"
	I1026 15:11:16.362153 1074625 pod_ready.go:86] duration metric: took 399.494041ms for pod "kube-proxy-smtlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:16.562828 1074625 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:16.962134 1074625 pod_ready.go:94] pod "kube-scheduler-no-preload-475081" is "Ready"
	I1026 15:11:16.962197 1074625 pod_ready.go:86] duration metric: took 399.305825ms for pod "kube-scheduler-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:16.962218 1074625 pod_ready.go:40] duration metric: took 1.603926195s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:11:17.009485 1074625 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:11:17.011155 1074625 out.go:179] * Done! kubectl is now configured to use "no-preload-475081" cluster and "default" namespace by default
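
Note: the run above completes only after two gates visible in the log: the /healthz probe at 15:11:14 ("returned 200: ok") and the labeled kube-system pod wait that follows it. Below is a minimal Go sketch of such a healthz poll; it is a hypothetical helper, not minikube's actual api_server.go code, with the URL and outer timeout taken from the log lines above.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz probes an apiserver /healthz endpoint until it answers 200
// or the deadline passes. InsecureSkipVerify stands in for loading the
// cluster CA here; a real client should verify the certificate.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the "returned 200: ok" case in the log
			}
		}
		// "connect: connection refused" in the log lands here: not up yet.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := pollHealthz("https://192.168.103.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}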
	
	
	==> CRI-O <==
	Oct 26 15:11:09 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:09.688641209Z" level=info msg="Started container" PID=2136 containerID=c5fd66710b3ad2a9d721698a50bc5200e76be8be55001c36289619a1566d3429 description=kube-system/coredns-5dd5756b68-hzjqn/coredns id=77cdcb3f-75f3-4f93-bbe7-509ed573bb62 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8f18389696f848b62780faf93164b37cf6e69f0eeda7c15384c267f6fde6ed86
	Oct 26 15:11:09 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:09.68910527Z" level=info msg="Started container" PID=2135 containerID=f77b0f4bdf929513b5b4bd73ea00a6272ea5b6794be1fb7af4d07ecdee11258a description=kube-system/storage-provisioner/storage-provisioner id=811e08ae-1d5e-4a51-a6ad-24b65b222ccf name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0354ca3191d7f2f196794796a8343acc34b8b5b997012ec0627a26a2ba2f664
	Oct 26 15:11:12 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:12.243685662Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3a0816e3-567f-4432-8535-272f32926b1e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:11:12 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:12.243773979Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:11:12 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:12.248830702Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6f660197ad43a48d1dfe0e4f596d024da43c744ce6c3d63612a7bb0069a8288e UID:fe9e7662-687b-457e-a57c-49441e024bbe NetNS:/var/run/netns/432f34fa-ca2c-4810-aaa0-9ab46a54dd05 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000c8ee70}] Aliases:map[]}"
	Oct 26 15:11:12 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:12.248869629Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 26 15:11:12 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:12.259474967Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6f660197ad43a48d1dfe0e4f596d024da43c744ce6c3d63612a7bb0069a8288e UID:fe9e7662-687b-457e-a57c-49441e024bbe NetNS:/var/run/netns/432f34fa-ca2c-4810-aaa0-9ab46a54dd05 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000c8ee70}] Aliases:map[]}"
	Oct 26 15:11:12 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:12.259614487Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 26 15:11:12 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:12.260619354Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 15:11:12 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:12.261725889Z" level=info msg="Ran pod sandbox 6f660197ad43a48d1dfe0e4f596d024da43c744ce6c3d63612a7bb0069a8288e with infra container: default/busybox/POD" id=3a0816e3-567f-4432-8535-272f32926b1e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:11:12 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:12.263137101Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=41cea8da-f288-4a5b-b749-d80133cba8ad name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:11:12 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:12.263323001Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=41cea8da-f288-4a5b-b749-d80133cba8ad name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:11:12 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:12.263372256Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=41cea8da-f288-4a5b-b749-d80133cba8ad name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:11:12 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:12.263921243Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=62ce6bac-9ad0-437b-8402-f43b838ca09e name=/runtime.v1.ImageService/PullImage
	Oct 26 15:11:12 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:12.26543387Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 26 15:11:12 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:12.982344446Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=62ce6bac-9ad0-437b-8402-f43b838ca09e name=/runtime.v1.ImageService/PullImage
	Oct 26 15:11:12 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:12.983211496Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6be4abfd-dc1f-499e-acb4-18492ae9bc5c name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:11:12 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:12.984527143Z" level=info msg="Creating container: default/busybox/busybox" id=897d518f-4396-482c-822b-95cb0688fe06 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:11:12 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:12.984643119Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:11:12 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:12.987988783Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:11:12 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:12.988422597Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:11:13 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:13.022809243Z" level=info msg="Created container 0612852b6d9265705175fc5917a9e2bdb1c406818ad763b58ec1c1e3daf9f744: default/busybox/busybox" id=897d518f-4396-482c-822b-95cb0688fe06 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:11:13 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:13.02341036Z" level=info msg="Starting container: 0612852b6d9265705175fc5917a9e2bdb1c406818ad763b58ec1c1e3daf9f744" id=1bb59ccd-b21d-4581-8c3c-96b5c7bfb22a name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:11:13 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:13.025234589Z" level=info msg="Started container" PID=2212 containerID=0612852b6d9265705175fc5917a9e2bdb1c406818ad763b58ec1c1e3daf9f744 description=default/busybox/busybox id=1bb59ccd-b21d-4581-8c3c-96b5c7bfb22a name=/runtime.v1.RuntimeService/StartContainer sandboxID=6f660197ad43a48d1dfe0e4f596d024da43c744ce6c3d63612a7bb0069a8288e
	Oct 26 15:11:19 old-k8s-version-330914 crio[772]: time="2025-10-26T15:11:19.028861362Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
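
Note: this section and the ones that follow are the tails of journal units and crictl queries, collected over SSH exactly as the earlier ssh_runner lines show (e.g. sudo journalctl -u crio -n 400). A local equivalent, sketched in Go; the helper name is illustrative and the command mirrors the logged invocation, which needs a systemd host and sudo.

package main

import (
	"fmt"
	"os/exec"
)

// tailUnit fetches the last n journal lines for a systemd unit, mirroring
// the logged command shape ("sudo journalctl -u crio -n 400").
func tailUnit(unit string, n int) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit,
		"-n", fmt.Sprint(n), "--no-pager").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := tailUnit("crio", 400)
	if err != nil {
		fmt.Println("journalctl failed:", err)
	}
	fmt.Print(out)
}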
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	0612852b6d926       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   6f660197ad43a       busybox                                          default
	c5fd66710b3ad       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      10 seconds ago      Running             coredns                   0                   8f18389696f84       coredns-5dd5756b68-hzjqn                         kube-system
	f77b0f4bdf929       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 seconds ago      Running             storage-provisioner       0                   c0354ca3191d7       storage-provisioner                              kube-system
	813b3d201beb3       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    21 seconds ago      Running             kindnet-cni               0                   060fb2eb4de2e       kindnet-b8hhx                                    kube-system
	6e0f992c119e5       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      23 seconds ago      Running             kube-proxy                0                   45a2d89c9d877       kube-proxy-829lp                                 kube-system
	fa00841ff06b0       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      41 seconds ago      Running             kube-controller-manager   0                   6ffca6fe4572c       kube-controller-manager-old-k8s-version-330914   kube-system
	9b070d53540dd       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      41 seconds ago      Running             etcd                      0                   32030103dc6d0       etcd-old-k8s-version-330914                      kube-system
	320f6f21c9f95       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      41 seconds ago      Running             kube-apiserver            0                   159a3d90436d2       kube-apiserver-old-k8s-version-330914            kube-system
	278ae65f9a838       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      41 seconds ago      Running             kube-scheduler            0                   bc5a58a8c4bc3       kube-scheduler-old-k8s-version-330914            kube-system
	
	
	==> coredns [c5fd66710b3ad2a9d721698a50bc5200e76be8be55001c36289619a1566d3429] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35097 - 30497 "HINFO IN 2266432907948493669.7983682611610680814. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.094521489s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-330914
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-330914
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=old-k8s-version-330914
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_10_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:10:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-330914
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:11:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:11:14 +0000   Sun, 26 Oct 2025 15:10:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:11:14 +0000   Sun, 26 Oct 2025 15:10:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:11:14 +0000   Sun, 26 Oct 2025 15:10:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:11:14 +0000   Sun, 26 Oct 2025 15:11:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-330914
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                7b3315c3-b9ce-4fbb-a096-582c49bc7b55
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-hzjqn                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-old-k8s-version-330914                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         37s
	  kube-system                 kindnet-b8hhx                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-old-k8s-version-330914             250m (3%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-old-k8s-version-330914    200m (2%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-829lp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-old-k8s-version-330914             100m (1%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node old-k8s-version-330914 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node old-k8s-version-330914 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x8 over 43s)  kubelet          Node old-k8s-version-330914 status is now: NodeHasSufficientPID
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s                kubelet          Node old-k8s-version-330914 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s                kubelet          Node old-k8s-version-330914 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s                kubelet          Node old-k8s-version-330914 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node old-k8s-version-330914 event: Registered Node old-k8s-version-330914 in Controller
	  Normal  NodeReady                11s                kubelet          Node old-k8s-version-330914 status is now: NodeReady
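
Note: the Conditions table above is what the earlier node-readiness checks read; Ready flipped to True at 15:11:09, eleven seconds before this capture. A hedged client-go sketch of the same condition lookup follows; the kubeconfig path matches the one the logged kubectl commands use, and the node name is the profile above.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path as used by the logged "kubectl ... --kubeconfig=" commands.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "old-k8s-version-330914", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Print the Ready condition, the field behind the table above.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s since %s\n", c.Status, c.LastTransitionTime)
		}
	}
}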
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	
	
	==> etcd [9b070d53540ddf385a78ef5779c2f89bf263f5c34eeafac7ebcc7733c0f0c365] <==
	{"level":"info","ts":"2025-10-26T15:10:38.609972Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-26T15:10:38.610085Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-26T15:10:38.610116Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-26T15:10:39.59664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-26T15:10:39.596686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-26T15:10:39.596751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-10-26T15:10:39.596767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-10-26T15:10:39.596776Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-26T15:10:39.596784Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-10-26T15:10:39.596792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-26T15:10:39.597724Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T15:10:39.597907Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-330914 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-26T15:10:39.597936Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T15:10:39.59791Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T15:10:39.598207Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-26T15:10:39.598273Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-26T15:10:39.598375Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T15:10:39.598475Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T15:10:39.5985Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T15:10:39.599366Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-26T15:10:39.599779Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-26T15:10:42.968437Z","caller":"traceutil/trace.go:171","msg":"trace[1441633338] transaction","detail":"{read_only:false; response_revision:242; number_of_response:1; }","duration":"156.831232ms","start":"2025-10-26T15:10:42.811575Z","end":"2025-10-26T15:10:42.968407Z","steps":["trace[1441633338] 'process raft request'  (duration: 94.216919ms)","trace[1441633338] 'compare'  (duration: 62.508031ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T15:10:43.03238Z","caller":"traceutil/trace.go:171","msg":"trace[1024914412] transaction","detail":"{read_only:false; response_revision:243; number_of_response:1; }","duration":"219.269276ms","start":"2025-10-26T15:10:42.813094Z","end":"2025-10-26T15:10:43.032363Z","steps":["trace[1024914412] 'process raft request'  (duration: 219.166507ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:10:43.299451Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.579883ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596637394339378 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" value_size:129 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-10-26T15:10:43.299559Z","caller":"traceutil/trace.go:171","msg":"trace[767463708] transaction","detail":"{read_only:false; response_revision:244; number_of_response:1; }","duration":"259.163737ms","start":"2025-10-26T15:10:43.040376Z","end":"2025-10-26T15:10:43.29954Z","steps":["trace[767463708] 'process raft request'  (duration: 121.049184ms)","trace[767463708] 'compare'  (duration: 137.444743ms)"],"step_count":2}
	
	
	==> kernel <==
	 15:11:20 up  2:53,  0 user,  load average: 2.64, 2.45, 1.61
	Linux old-k8s-version-330914 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [813b3d201beb3dc2b8fe9e2bd3433c57d95e644c059d0f4863a4e2a46feb8222] <==
	I1026 15:10:58.863002       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:10:58.863274       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 15:10:58.863419       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:10:58.863433       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:10:58.863443       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:10:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:10:59.157881       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:10:59.157944       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:10:59.157963       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:10:59.158148       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 15:10:59.458223       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:10:59.458255       1 metrics.go:72] Registering metrics
	I1026 15:10:59.458324       1 controller.go:711] "Syncing nftables rules"
	I1026 15:11:09.158212       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:11:09.158273       1 main.go:301] handling current node
	I1026 15:11:19.158754       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:11:19.158815       1 main.go:301] handling current node
	
	
	==> kube-apiserver [320f6f21c9f95e2f6a8e3c09014522c75779b1ac9a8d2cb48f3701f9f0fa3a30] <==
	I1026 15:10:40.749852       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:10:40.749860       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:10:40.750006       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1026 15:10:40.750045       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1026 15:10:40.750077       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 15:10:40.750211       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1026 15:10:40.750619       1 shared_informer.go:318] Caches are synced for configmaps
	I1026 15:10:40.751782       1 controller.go:624] quota admission added evaluator for: namespaces
	I1026 15:10:40.771467       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:10:40.777212       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1026 15:10:41.656971       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 15:10:41.661110       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 15:10:41.661129       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:10:42.158915       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:10:42.197126       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:10:42.261339       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 15:10:42.267926       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1026 15:10:42.269028       1 controller.go:624] quota admission added evaluator for: endpoints
	I1026 15:10:42.273938       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:10:42.702755       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1026 15:10:43.734006       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1026 15:10:43.753216       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 15:10:43.770502       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1026 15:10:56.366418       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1026 15:10:56.471819       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [fa00841ff06b06aedcd7bdf5f64dd8911d954367950c22a2adf254e01a008080] <==
	I1026 15:10:55.636430       1 shared_informer.go:318] Caches are synced for attach detach
	I1026 15:10:55.655772       1 shared_informer.go:318] Caches are synced for persistent volume
	I1026 15:10:55.658979       1 shared_informer.go:318] Caches are synced for PV protection
	I1026 15:10:55.675751       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1026 15:10:55.976283       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 15:10:56.012032       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 15:10:56.012085       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1026 15:10:56.370512       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1026 15:10:56.495775       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-b8hhx"
	I1026 15:10:56.495811       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-829lp"
	I1026 15:10:56.576660       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-tpgw7"
	I1026 15:10:56.586211       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-hzjqn"
	I1026 15:10:56.595388       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="224.950148ms"
	I1026 15:10:56.604655       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.206796ms"
	I1026 15:10:56.604812       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.019µs"
	I1026 15:10:56.843029       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1026 15:10:56.861732       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-tpgw7"
	I1026 15:10:56.876041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="33.609646ms"
	I1026 15:10:56.885977       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.879963ms"
	I1026 15:10:56.886262       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="143.843µs"
	I1026 15:11:09.335940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="108.715µs"
	I1026 15:11:09.348038       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="474.138µs"
	I1026 15:11:09.940497       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.567275ms"
	I1026 15:11:09.940660       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.367µs"
	I1026 15:11:10.514644       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [6e0f992c119e56f6b5947362e8448b67e801fe040f7a0d06445e0efa86deedfc] <==
	I1026 15:10:56.943543       1 server_others.go:69] "Using iptables proxy"
	I1026 15:10:56.956909       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1026 15:10:56.977594       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:10:56.980188       1 server_others.go:152] "Using iptables Proxier"
	I1026 15:10:56.980239       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1026 15:10:56.980250       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1026 15:10:56.980292       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1026 15:10:56.980632       1 server.go:846] "Version info" version="v1.28.0"
	I1026 15:10:56.980649       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:10:56.981299       1 config.go:97] "Starting endpoint slice config controller"
	I1026 15:10:56.981369       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1026 15:10:56.981493       1 config.go:188] "Starting service config controller"
	I1026 15:10:56.981535       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1026 15:10:56.981587       1 config.go:315] "Starting node config controller"
	I1026 15:10:56.981615       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1026 15:10:57.081859       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1026 15:10:57.081885       1 shared_informer.go:318] Caches are synced for service config
	I1026 15:10:57.081902       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [278ae65f9a83848185716fdfc7ad2c359dff3d98c7984b17d6022f97d5f189db] <==
	W1026 15:10:40.726612       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1026 15:10:40.726780       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1026 15:10:41.554766       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1026 15:10:41.554806       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1026 15:10:41.566464       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1026 15:10:41.566501       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1026 15:10:41.583247       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1026 15:10:41.583365       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1026 15:10:41.591549       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1026 15:10:41.591585       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1026 15:10:41.760207       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1026 15:10:41.760247       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 15:10:41.779846       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1026 15:10:41.779893       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1026 15:10:41.810444       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1026 15:10:41.810473       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1026 15:10:41.833728       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1026 15:10:41.833777       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1026 15:10:41.914295       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1026 15:10:41.914346       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1026 15:10:41.941267       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1026 15:10:41.941306       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1026 15:10:41.999505       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1026 15:10:41.999553       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1026 15:10:43.617035       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 26 15:10:55 old-k8s-version-330914 kubelet[1381]: I1026 15:10:55.502805    1381 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 26 15:10:56 old-k8s-version-330914 kubelet[1381]: I1026 15:10:56.509085    1381 topology_manager.go:215] "Topology Admit Handler" podUID="b212cf79-e2d5-49ef-9e66-80ffcd18774f" podNamespace="kube-system" podName="kube-proxy-829lp"
	Oct 26 15:10:56 old-k8s-version-330914 kubelet[1381]: I1026 15:10:56.509575    1381 topology_manager.go:215] "Topology Admit Handler" podUID="522edddb-fb4b-4e11-a49f-48843f236bab" podNamespace="kube-system" podName="kindnet-b8hhx"
	Oct 26 15:10:56 old-k8s-version-330914 kubelet[1381]: I1026 15:10:56.587615    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/522edddb-fb4b-4e11-a49f-48843f236bab-cni-cfg\") pod \"kindnet-b8hhx\" (UID: \"522edddb-fb4b-4e11-a49f-48843f236bab\") " pod="kube-system/kindnet-b8hhx"
	Oct 26 15:10:56 old-k8s-version-330914 kubelet[1381]: I1026 15:10:56.587683    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gq2n\" (UniqueName: \"kubernetes.io/projected/522edddb-fb4b-4e11-a49f-48843f236bab-kube-api-access-2gq2n\") pod \"kindnet-b8hhx\" (UID: \"522edddb-fb4b-4e11-a49f-48843f236bab\") " pod="kube-system/kindnet-b8hhx"
	Oct 26 15:10:56 old-k8s-version-330914 kubelet[1381]: I1026 15:10:56.587722    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b212cf79-e2d5-49ef-9e66-80ffcd18774f-kube-proxy\") pod \"kube-proxy-829lp\" (UID: \"b212cf79-e2d5-49ef-9e66-80ffcd18774f\") " pod="kube-system/kube-proxy-829lp"
	Oct 26 15:10:56 old-k8s-version-330914 kubelet[1381]: I1026 15:10:56.587750    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b212cf79-e2d5-49ef-9e66-80ffcd18774f-lib-modules\") pod \"kube-proxy-829lp\" (UID: \"b212cf79-e2d5-49ef-9e66-80ffcd18774f\") " pod="kube-system/kube-proxy-829lp"
	Oct 26 15:10:56 old-k8s-version-330914 kubelet[1381]: I1026 15:10:56.587790    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrzw9\" (UniqueName: \"kubernetes.io/projected/b212cf79-e2d5-49ef-9e66-80ffcd18774f-kube-api-access-lrzw9\") pod \"kube-proxy-829lp\" (UID: \"b212cf79-e2d5-49ef-9e66-80ffcd18774f\") " pod="kube-system/kube-proxy-829lp"
	Oct 26 15:10:56 old-k8s-version-330914 kubelet[1381]: I1026 15:10:56.587819    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/522edddb-fb4b-4e11-a49f-48843f236bab-xtables-lock\") pod \"kindnet-b8hhx\" (UID: \"522edddb-fb4b-4e11-a49f-48843f236bab\") " pod="kube-system/kindnet-b8hhx"
	Oct 26 15:10:56 old-k8s-version-330914 kubelet[1381]: I1026 15:10:56.587848    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/522edddb-fb4b-4e11-a49f-48843f236bab-lib-modules\") pod \"kindnet-b8hhx\" (UID: \"522edddb-fb4b-4e11-a49f-48843f236bab\") " pod="kube-system/kindnet-b8hhx"
	Oct 26 15:10:56 old-k8s-version-330914 kubelet[1381]: I1026 15:10:56.587881    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b212cf79-e2d5-49ef-9e66-80ffcd18774f-xtables-lock\") pod \"kube-proxy-829lp\" (UID: \"b212cf79-e2d5-49ef-9e66-80ffcd18774f\") " pod="kube-system/kube-proxy-829lp"
	Oct 26 15:10:57 old-k8s-version-330914 kubelet[1381]: I1026 15:10:57.903728    1381 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-829lp" podStartSLOduration=1.903673638 podCreationTimestamp="2025-10-26 15:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:10:57.903586286 +0000 UTC m=+14.211946295" watchObservedRunningTime="2025-10-26 15:10:57.903673638 +0000 UTC m=+14.212033647"
	Oct 26 15:10:58 old-k8s-version-330914 kubelet[1381]: I1026 15:10:58.899383    1381 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-b8hhx" podStartSLOduration=1.103130596 podCreationTimestamp="2025-10-26 15:10:56 +0000 UTC" firstStartedPulling="2025-10-26 15:10:56.82854328 +0000 UTC m=+13.136903283" lastFinishedPulling="2025-10-26 15:10:58.624742181 +0000 UTC m=+14.933102186" observedRunningTime="2025-10-26 15:10:58.899207153 +0000 UTC m=+15.207567163" watchObservedRunningTime="2025-10-26 15:10:58.899329499 +0000 UTC m=+15.207689510"
	Oct 26 15:11:09 old-k8s-version-330914 kubelet[1381]: I1026 15:11:09.312155    1381 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 26 15:11:09 old-k8s-version-330914 kubelet[1381]: I1026 15:11:09.336131    1381 topology_manager.go:215] "Topology Admit Handler" podUID="21211baf-4153-41c8-aacc-6d313dcdef82" podNamespace="kube-system" podName="coredns-5dd5756b68-hzjqn"
	Oct 26 15:11:09 old-k8s-version-330914 kubelet[1381]: I1026 15:11:09.336361    1381 topology_manager.go:215] "Topology Admit Handler" podUID="d505b114-6834-4c0b-858b-a785482ca1ec" podNamespace="kube-system" podName="storage-provisioner"
	Oct 26 15:11:09 old-k8s-version-330914 kubelet[1381]: I1026 15:11:09.380816    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q822\" (UniqueName: \"kubernetes.io/projected/d505b114-6834-4c0b-858b-a785482ca1ec-kube-api-access-9q822\") pod \"storage-provisioner\" (UID: \"d505b114-6834-4c0b-858b-a785482ca1ec\") " pod="kube-system/storage-provisioner"
	Oct 26 15:11:09 old-k8s-version-330914 kubelet[1381]: I1026 15:11:09.380875    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21211baf-4153-41c8-aacc-6d313dcdef82-config-volume\") pod \"coredns-5dd5756b68-hzjqn\" (UID: \"21211baf-4153-41c8-aacc-6d313dcdef82\") " pod="kube-system/coredns-5dd5756b68-hzjqn"
	Oct 26 15:11:09 old-k8s-version-330914 kubelet[1381]: I1026 15:11:09.380901    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99xv4\" (UniqueName: \"kubernetes.io/projected/21211baf-4153-41c8-aacc-6d313dcdef82-kube-api-access-99xv4\") pod \"coredns-5dd5756b68-hzjqn\" (UID: \"21211baf-4153-41c8-aacc-6d313dcdef82\") " pod="kube-system/coredns-5dd5756b68-hzjqn"
	Oct 26 15:11:09 old-k8s-version-330914 kubelet[1381]: I1026 15:11:09.381044    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d505b114-6834-4c0b-858b-a785482ca1ec-tmp\") pod \"storage-provisioner\" (UID: \"d505b114-6834-4c0b-858b-a785482ca1ec\") " pod="kube-system/storage-provisioner"
	Oct 26 15:11:09 old-k8s-version-330914 kubelet[1381]: I1026 15:11:09.923001    1381 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.922946345 podCreationTimestamp="2025-10-26 15:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:11:09.922667451 +0000 UTC m=+26.231027459" watchObservedRunningTime="2025-10-26 15:11:09.922946345 +0000 UTC m=+26.231306354"
	Oct 26 15:11:09 old-k8s-version-330914 kubelet[1381]: I1026 15:11:09.932879    1381 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-hzjqn" podStartSLOduration=13.932826146 podCreationTimestamp="2025-10-26 15:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:11:09.932571574 +0000 UTC m=+26.240931594" watchObservedRunningTime="2025-10-26 15:11:09.932826146 +0000 UTC m=+26.241186157"
	Oct 26 15:11:11 old-k8s-version-330914 kubelet[1381]: I1026 15:11:11.941551    1381 topology_manager.go:215] "Topology Admit Handler" podUID="fe9e7662-687b-457e-a57c-49441e024bbe" podNamespace="default" podName="busybox"
	Oct 26 15:11:11 old-k8s-version-330914 kubelet[1381]: I1026 15:11:11.997731    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvd5p\" (UniqueName: \"kubernetes.io/projected/fe9e7662-687b-457e-a57c-49441e024bbe-kube-api-access-dvd5p\") pod \"busybox\" (UID: \"fe9e7662-687b-457e-a57c-49441e024bbe\") " pod="default/busybox"
	Oct 26 15:11:13 old-k8s-version-330914 kubelet[1381]: I1026 15:11:13.932250    1381 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.213148719 podCreationTimestamp="2025-10-26 15:11:11 +0000 UTC" firstStartedPulling="2025-10-26 15:11:12.263581363 +0000 UTC m=+28.571941365" lastFinishedPulling="2025-10-26 15:11:12.98264323 +0000 UTC m=+29.291003222" observedRunningTime="2025-10-26 15:11:13.931725855 +0000 UTC m=+30.240085864" watchObservedRunningTime="2025-10-26 15:11:13.932210576 +0000 UTC m=+30.240570581"
	
	
	==> storage-provisioner [f77b0f4bdf929513b5b4bd73ea00a6272ea5b6794be1fb7af4d07ecdee11258a] <==
	I1026 15:11:09.702770       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 15:11:09.712180       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 15:11:09.712237       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 15:11:09.719617       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:11:09.719736       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6451c6cc-4615-4622-b59c-d1296145dee3", APIVersion:"v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-330914_30f77a10-bab6-4cf9-adba-fcc2bb5b0a96 became leader
	I1026 15:11:09.719800       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-330914_30f77a10-bab6-4cf9-adba-fcc2bb5b0a96!
	I1026 15:11:09.820550       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-330914_30f77a10-bab6-4cf9-adba-fcc2bb5b0a96!
	

                                                
                                                
-- /stdout --
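Note on the scheduler log above: the repeated "forbidden" warnings from kube-scheduler are the usual control-plane startup race, in which the scheduler's informers begin listing resources before the API server has finished applying the system:kube-scheduler RBAC bindings; the closing "Caches are synced" line shows they resolved on their own. A minimal sanity check, as a sketch against the context name from this run:

	kubectl --context old-k8s-version-330914 get clusterrolebinding system:kube-scheduler -o wide
	kubectl --context old-k8s-version-330914 auth can-i list pods --as=system:kube-scheduler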
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-330914 -n old-k8s-version-330914
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-330914 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.40s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-475081 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-475081 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (271.981927ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:11:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-475081 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
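The root cause is visible in the stderr above: before enabling an addon, minikube probes whether the cluster is paused by running "sudo runc list -f json" inside the node, and that probe fails because /run/runc does not exist in the kic container under the crio runtime. A minimal sketch to reproduce the probe by hand, assuming the no-preload-475081 node container is still running, and noting that the state directory CRI-O actually uses may differ:

	docker exec no-preload-475081 sudo runc list -f json   # reproduces the "open /run/runc" error
	docker exec no-preload-475081 sudo ls /run             # show which runtime state directories exist
	docker exec no-preload-475081 sudo crictl ps           # CRI-O's own view of the running containers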
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-475081 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-475081 describe deploy/metrics-server -n kube-system: exit status 1 (69.60221ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-475081 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
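For context, the assertion at start_stop_delete_test.go:219 expects the enable step to have rewritten the metrics-server image to the fake registry, so the deployment's container image should contain "fake.domain/registry.k8s.io/echoserver:1.4"; since the enable itself failed, the deployment was never created and the describe returns NotFound. Had the deployment existed, a sketch of how to read the effective image with plain kubectl:

	kubectl --context no-preload-475081 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'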
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-475081
helpers_test.go:243: (dbg) docker inspect no-preload-475081:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e55f49a3db72f1b24108085ea7f4b5e53553ce1ef7c1d5f10ad348de3f9ba2f",
	        "Created": "2025-10-26T15:10:28.066508779Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1075809,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:10:28.107803382Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/5e55f49a3db72f1b24108085ea7f4b5e53553ce1ef7c1d5f10ad348de3f9ba2f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e55f49a3db72f1b24108085ea7f4b5e53553ce1ef7c1d5f10ad348de3f9ba2f/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e55f49a3db72f1b24108085ea7f4b5e53553ce1ef7c1d5f10ad348de3f9ba2f/hosts",
	        "LogPath": "/var/lib/docker/containers/5e55f49a3db72f1b24108085ea7f4b5e53553ce1ef7c1d5f10ad348de3f9ba2f/5e55f49a3db72f1b24108085ea7f4b5e53553ce1ef7c1d5f10ad348de3f9ba2f-json.log",
	        "Name": "/no-preload-475081",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-475081:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-475081",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e55f49a3db72f1b24108085ea7f4b5e53553ce1ef7c1d5f10ad348de3f9ba2f",
	                "LowerDir": "/var/lib/docker/overlay2/5d8f134ee6ffed6d774f4544c7c284f648de8e02713b44278cfa81aa87432fd1-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5d8f134ee6ffed6d774f4544c7c284f648de8e02713b44278cfa81aa87432fd1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5d8f134ee6ffed6d774f4544c7c284f648de8e02713b44278cfa81aa87432fd1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5d8f134ee6ffed6d774f4544c7c284f648de8e02713b44278cfa81aa87432fd1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-475081",
	                "Source": "/var/lib/docker/volumes/no-preload-475081/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-475081",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-475081",
	                "name.minikube.sigs.k8s.io": "no-preload-475081",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dd0cebe78d1ffd48aa035e5b55dbdd9507bee210d00f271107548e859d48773f",
	            "SandboxKey": "/var/run/docker/netns/dd0cebe78d1f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33827"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33828"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33831"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33829"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33830"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-475081": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:8f:a6:7f:a2:e2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "da1bd6d7ce5203f11d1c54a9875cb6a6358a5bc321289fcb416f235a12121f07",
	                    "EndpointID": "0687fb4e753bdd844942e3aaa3e5058a8336d38b8ec3061833c1e3c6f7ad0b05",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-475081",
	                        "5e55f49a3db7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
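Within this inspect dump, the detail the harness actually depends on is the port mapping: API server port 8443/tcp is published on 127.0.0.1:33830, which is how status and kubectl calls reach the cluster. The same value can be extracted without reading the full JSON, as a sketch using a standard docker inspect Go-template filter:

	docker inspect no-preload-475081 \
	  --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'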
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-475081 -n no-preload-475081
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-475081 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-475081 logs -n 25: (1.120440667s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-498531 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-498531             │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │                     │
	│ ssh     │ -p cilium-498531 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-498531             │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │                     │
	│ ssh     │ -p cilium-498531 sudo crio config                                                                                                                                                                                                             │ cilium-498531             │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │                     │
	│ delete  │ -p cilium-498531                                                                                                                                                                                                                              │ cilium-498531             │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │ 26 Oct 25 15:09 UTC │
	│ start   │ -p cert-expiration-619245 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-619245    │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │ 26 Oct 25 15:09 UTC │
	│ start   │ -p NoKubernetes-917490 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │ 26 Oct 25 15:09 UTC │
	│ start   │ -p force-systemd-flag-391593 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-391593 │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │ 26 Oct 25 15:09 UTC │
	│ delete  │ -p NoKubernetes-917490                                                                                                                                                                                                                        │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │ 26 Oct 25 15:09 UTC │
	│ start   │ -p NoKubernetes-917490 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │ 26 Oct 25 15:09 UTC │
	│ ssh     │ -p NoKubernetes-917490 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │                     │
	│ ssh     │ force-systemd-flag-391593 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-391593 │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │ 26 Oct 25 15:09 UTC │
	│ delete  │ -p force-systemd-flag-391593                                                                                                                                                                                                                  │ force-systemd-flag-391593 │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │ 26 Oct 25 15:10 UTC │
	│ start   │ -p cert-options-124833 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-124833       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ stop    │ -p NoKubernetes-917490                                                                                                                                                                                                                        │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ start   │ -p NoKubernetes-917490 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ ssh     │ -p NoKubernetes-917490 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ delete  │ -p NoKubernetes-917490                                                                                                                                                                                                                        │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ start   │ -p old-k8s-version-330914 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-330914    │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:11 UTC │
	│ ssh     │ cert-options-124833 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-124833       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ ssh     │ -p cert-options-124833 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-124833       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ delete  │ -p cert-options-124833                                                                                                                                                                                                                        │ cert-options-124833       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ start   │ -p no-preload-475081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-475081         │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:11 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-330914 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-330914    │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ stop    │ -p old-k8s-version-330914 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-330914    │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-475081 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-475081         │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:10:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:10:27.003560 1074625 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:10:27.003818 1074625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:10:27.003825 1074625 out.go:374] Setting ErrFile to fd 2...
	I1026 15:10:27.003829 1074625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:10:27.004048 1074625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:10:27.004549 1074625 out.go:368] Setting JSON to false
	I1026 15:10:27.005877 1074625 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10375,"bootTime":1761481052,"procs":364,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:10:27.005989 1074625 start.go:141] virtualization: kvm guest
	I1026 15:10:27.008185 1074625 out.go:179] * [no-preload-475081] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:10:27.010340 1074625 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:10:27.010376 1074625 notify.go:220] Checking for updates...
	I1026 15:10:27.014512 1074625 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:10:27.015757 1074625 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:10:27.017721 1074625 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 15:10:27.019483 1074625 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:10:27.020699 1074625 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:10:27.022421 1074625 config.go:182] Loaded profile config "cert-expiration-619245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:10:27.022573 1074625 config.go:182] Loaded profile config "kubernetes-upgrade-176599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:10:27.022728 1074625 config.go:182] Loaded profile config "old-k8s-version-330914": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 15:10:27.022869 1074625 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:10:27.050966 1074625 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 15:10:27.051052 1074625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:10:27.125497 1074625 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 15:10:27.112696147 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:10:27.125606 1074625 docker.go:318] overlay module found
	I1026 15:10:27.128060 1074625 out.go:179] * Using the docker driver based on user configuration
	I1026 15:10:27.129256 1074625 start.go:305] selected driver: docker
	I1026 15:10:27.129276 1074625 start.go:925] validating driver "docker" against <nil>
	I1026 15:10:27.129293 1074625 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:10:27.130033 1074625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:10:27.214730 1074625 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 15:10:27.202343666 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:10:27.214951 1074625 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:10:27.215216 1074625 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:10:27.217420 1074625 out.go:179] * Using Docker driver with root privileges
	I1026 15:10:27.218781 1074625 cni.go:84] Creating CNI manager for ""
	I1026 15:10:27.218883 1074625 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:10:27.218896 1074625 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 15:10:27.219022 1074625 start.go:349] cluster config:
	{Name:no-preload-475081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-475081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:10:27.220680 1074625 out.go:179] * Starting "no-preload-475081" primary control-plane node in "no-preload-475081" cluster
	I1026 15:10:27.222124 1074625 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:10:27.223550 1074625 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:10:27.224785 1074625 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:10:27.224907 1074625 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:10:27.224980 1074625 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/config.json ...
	I1026 15:10:27.225021 1074625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/config.json: {Name:mk4b3cf580b49d6ad576694b31a852b8c72157a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:27.225130 1074625 cache.go:107] acquiring lock: {Name:mk937f429b3d3636ff8775b90e16c023489c7adf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:10:27.225127 1074625 cache.go:107] acquiring lock: {Name:mk542564d39af87b00a1863120bb08cf008fe7c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:10:27.225252 1074625 cache.go:107] acquiring lock: {Name:mk1536b2f10db5b203b98b8484729c964c7ca6e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:10:27.225269 1074625 cache.go:115] /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1026 15:10:27.225279 1074625 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 173.267µs
	I1026 15:10:27.225290 1074625 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1026 15:10:27.225290 1074625 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:10:27.225340 1074625 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:10:27.225331 1074625 cache.go:107] acquiring lock: {Name:mkc179cf4029d6736ce61dbfad39b348fc2c96b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:10:27.225304 1074625 cache.go:107] acquiring lock: {Name:mk59c1a44c70bc7e7856311c44a1559489b29c53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:10:27.225414 1074625 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1026 15:10:27.225466 1074625 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1026 15:10:27.225592 1074625 cache.go:107] acquiring lock: {Name:mk6b4452625dc58192fa1eb2696a2e362bd1db25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:10:27.225616 1074625 cache.go:107] acquiring lock: {Name:mk4c631399a8aca700734c5e2f0c2f2d3de52916 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:10:27.225677 1074625 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:10:27.225691 1074625 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:10:27.225592 1074625 cache.go:107] acquiring lock: {Name:mkf66b984302bba364c4bdc743639502359ea174 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:10:27.225954 1074625 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:10:27.227598 1074625 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:10:27.227915 1074625 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:10:27.228568 1074625 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:10:27.229213 1074625 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:10:27.229505 1074625 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1026 15:10:27.230336 1074625 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1026 15:10:27.230665 1074625 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:10:27.265294 1074625 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:10:27.265319 1074625 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:10:27.265335 1074625 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:10:27.265367 1074625 start.go:360] acquireMachinesLock for no-preload-475081: {Name:mk9c0a34e6930824c553b7de78574fec03de3709 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:10:27.265470 1074625 start.go:364] duration metric: took 84.128µs to acquireMachinesLock for "no-preload-475081"
	I1026 15:10:27.265501 1074625 start.go:93] Provisioning new machine with config: &{Name:no-preload-475081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-475081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:10:27.265579 1074625 start.go:125] createHost starting for "" (driver="docker")
	I1026 15:10:26.360896 1072816 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-330914:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.377121956s)
	I1026 15:10:26.360941 1072816 kic.go:203] duration metric: took 5.377327615s to extract preloaded images to volume ...
	W1026 15:10:26.361069 1072816 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 15:10:26.361100 1072816 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 15:10:26.361199 1072816 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 15:10:26.422361 1072816 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-330914 --name old-k8s-version-330914 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-330914 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-330914 --network old-k8s-version-330914 --ip 192.168.85.2 --volume old-k8s-version-330914:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 15:10:26.993923 1072816 cli_runner.go:164] Run: docker container inspect old-k8s-version-330914 --format={{.State.Running}}
	I1026 15:10:27.014269 1072816 cli_runner.go:164] Run: docker container inspect old-k8s-version-330914 --format={{.State.Status}}
	I1026 15:10:27.038324 1072816 cli_runner.go:164] Run: docker exec old-k8s-version-330914 stat /var/lib/dpkg/alternatives/iptables
	I1026 15:10:27.101250 1072816 oci.go:144] the created container "old-k8s-version-330914" has a running status.
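Note: the docker run above publishes the guest's SSH (22), Docker (2376), and apiserver (8443) ports on ephemeral 127.0.0.1 ports. A minimal way to recover the mapping by hand, assuming the container name from this log:

	docker port old-k8s-version-330914 22/tcp    # e.g. 127.0.0.1:33822, the address the SSH provisioner dials below
	docker port old-k8s-version-330914 8443/tcp  # host endpoint for the apiserver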
	I1026 15:10:27.101302 1072816 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/old-k8s-version-330914/id_rsa...
	I1026 15:10:27.526543 1072816 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-841519/.minikube/machines/old-k8s-version-330914/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 15:10:27.558794 1072816 cli_runner.go:164] Run: docker container inspect old-k8s-version-330914 --format={{.State.Status}}
	I1026 15:10:27.583723 1072816 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 15:10:27.583747 1072816 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-330914 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 15:10:27.645295 1072816 cli_runner.go:164] Run: docker container inspect old-k8s-version-330914 --format={{.State.Status}}
	I1026 15:10:27.670009 1072816 machine.go:93] provisionDockerMachine start ...
	I1026 15:10:27.670125 1072816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:10:27.699840 1072816 main.go:141] libmachine: Using SSH client type: native
	I1026 15:10:27.700217 1072816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1026 15:10:27.700235 1072816 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:10:27.865463 1072816 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-330914
	
	I1026 15:10:27.865494 1072816 ubuntu.go:182] provisioning hostname "old-k8s-version-330914"
	I1026 15:10:27.865574 1072816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:10:27.891354 1072816 main.go:141] libmachine: Using SSH client type: native
	I1026 15:10:27.891728 1072816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1026 15:10:27.891780 1072816 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-330914 && echo "old-k8s-version-330914" | sudo tee /etc/hostname
	I1026 15:10:28.066183 1072816 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-330914
	
	I1026 15:10:28.066264 1072816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:10:28.087012 1072816 main.go:141] libmachine: Using SSH client type: native
	I1026 15:10:28.087357 1072816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1026 15:10:28.087388 1072816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-330914' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-330914/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-330914' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:10:28.240673 1072816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:10:28.240705 1072816 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-841519/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-841519/.minikube}
	I1026 15:10:28.240729 1072816 ubuntu.go:190] setting up certificates
	I1026 15:10:28.240742 1072816 provision.go:84] configureAuth start
	I1026 15:10:28.240822 1072816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-330914
	I1026 15:10:28.261303 1072816 provision.go:143] copyHostCerts
	I1026 15:10:28.261390 1072816 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem, removing ...
	I1026 15:10:28.261408 1072816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem
	I1026 15:10:28.261502 1072816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem (1082 bytes)
	I1026 15:10:28.261641 1072816 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem, removing ...
	I1026 15:10:28.261654 1072816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem
	I1026 15:10:28.261696 1072816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem (1123 bytes)
	I1026 15:10:28.261802 1072816 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem, removing ...
	I1026 15:10:28.261822 1072816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem
	I1026 15:10:28.261854 1072816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem (1675 bytes)
	I1026 15:10:28.261927 1072816 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-330914 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-330914]
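provision.go builds the server certificate in-process; as a rough openssl equivalent (not minikube's actual code path), assuming the CA pair and SAN list printed in the line above:

	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.old-k8s-version-330914" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:old-k8s-version-330914") \
	  -out server.pem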
	I1026 15:10:29.304631 1072816 provision.go:177] copyRemoteCerts
	I1026 15:10:29.304693 1072816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:10:29.304733 1072816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:10:29.323888 1072816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/old-k8s-version-330914/id_rsa Username:docker}
	I1026 15:10:29.426493 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:10:29.447261 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1026 15:10:29.466016 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:10:29.484108 1072816 provision.go:87] duration metric: took 1.243350184s to configureAuth
	I1026 15:10:29.484143 1072816 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:10:29.484345 1072816 config.go:182] Loaded profile config "old-k8s-version-330914": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 15:10:29.484441 1072816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:10:29.502386 1072816 main.go:141] libmachine: Using SSH client type: native
	I1026 15:10:29.502618 1072816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1026 15:10:29.502635 1072816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:10:29.771907 1072816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:10:29.771947 1072816 machine.go:96] duration metric: took 2.101915141s to provisionDockerMachine
	I1026 15:10:29.771960 1072816 client.go:171] duration metric: took 9.386031718s to LocalClient.Create
	I1026 15:10:29.771985 1072816 start.go:167] duration metric: took 9.38611743s to libmachine.API.Create "old-k8s-version-330914"
	I1026 15:10:29.771995 1072816 start.go:293] postStartSetup for "old-k8s-version-330914" (driver="docker")
	I1026 15:10:29.772014 1072816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:10:29.772082 1072816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:10:29.772136 1072816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:10:29.793479 1072816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/old-k8s-version-330914/id_rsa Username:docker}
	I1026 15:10:29.897024 1072816 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:10:29.900786 1072816 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:10:29.900822 1072816 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:10:29.900835 1072816 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/addons for local assets ...
	I1026 15:10:29.900896 1072816 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/files for local assets ...
	I1026 15:10:29.901002 1072816 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem -> 8450952.pem in /etc/ssl/certs
	I1026 15:10:29.901123 1072816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:10:29.909506 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:10:29.930476 1072816 start.go:296] duration metric: took 158.461324ms for postStartSetup
	I1026 15:10:29.930859 1072816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-330914
	I1026 15:10:29.949760 1072816 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/config.json ...
	I1026 15:10:29.950092 1072816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:10:29.950153 1072816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:10:29.968661 1072816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/old-k8s-version-330914/id_rsa Username:docker}
	I1026 15:10:30.067271 1072816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:10:30.072081 1072816 start.go:128] duration metric: took 9.689671202s to createHost
	I1026 15:10:30.072108 1072816 start.go:83] releasing machines lock for "old-k8s-version-330914", held for 9.689845414s
	I1026 15:10:30.072193 1072816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-330914
	I1026 15:10:30.090512 1072816 ssh_runner.go:195] Run: cat /version.json
	I1026 15:10:30.090559 1072816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:10:30.090592 1072816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:10:30.090680 1072816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:10:30.112513 1072816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/old-k8s-version-330914/id_rsa Username:docker}
	I1026 15:10:30.112682 1072816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/old-k8s-version-330914/id_rsa Username:docker}
	I1026 15:10:27.048230 1030092 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:10:27.048667 1030092 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1026 15:10:27.048729 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:10:27.048799 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:10:27.087066 1030092 cri.go:89] found id: "a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:27.087109 1030092 cri.go:89] found id: ""
	I1026 15:10:27.087118 1030092 logs.go:282] 1 containers: [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c]
	I1026 15:10:27.087221 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:27.093044 1030092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:10:27.093117 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:10:27.130720 1030092 cri.go:89] found id: ""
	I1026 15:10:27.130749 1030092 logs.go:282] 0 containers: []
	W1026 15:10:27.130758 1030092 logs.go:284] No container was found matching "etcd"
	I1026 15:10:27.130767 1030092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:10:27.130821 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:10:27.178192 1030092 cri.go:89] found id: ""
	I1026 15:10:27.178223 1030092 logs.go:282] 0 containers: []
	W1026 15:10:27.178236 1030092 logs.go:284] No container was found matching "coredns"
	I1026 15:10:27.178259 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:10:27.178320 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:10:27.214200 1030092 cri.go:89] found id: "933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:27.214224 1030092 cri.go:89] found id: ""
	I1026 15:10:27.214234 1030092 logs.go:282] 1 containers: [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e]
	I1026 15:10:27.214294 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:27.218845 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:10:27.218925 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:10:27.264658 1030092 cri.go:89] found id: ""
	I1026 15:10:27.264774 1030092 logs.go:282] 0 containers: []
	W1026 15:10:27.264817 1030092 logs.go:284] No container was found matching "kube-proxy"
	I1026 15:10:27.264839 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:10:27.264932 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:10:27.302943 1030092 cri.go:89] found id: "fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:27.302970 1030092 cri.go:89] found id: ""
	I1026 15:10:27.302981 1030092 logs.go:282] 1 containers: [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1]
	I1026 15:10:27.303047 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:27.308381 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:10:27.308459 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:10:27.348596 1030092 cri.go:89] found id: ""
	I1026 15:10:27.348628 1030092 logs.go:282] 0 containers: []
	W1026 15:10:27.348640 1030092 logs.go:284] No container was found matching "kindnet"
	I1026 15:10:27.348648 1030092 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 15:10:27.348714 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 15:10:27.390272 1030092 cri.go:89] found id: ""
	I1026 15:10:27.390309 1030092 logs.go:282] 0 containers: []
	W1026 15:10:27.390322 1030092 logs.go:284] No container was found matching "storage-provisioner"
	I1026 15:10:27.390336 1030092 logs.go:123] Gathering logs for kubelet ...
	I1026 15:10:27.390353 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:10:27.516762 1030092 logs.go:123] Gathering logs for dmesg ...
	I1026 15:10:27.516810 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:10:27.539323 1030092 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:10:27.539397 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:10:27.624275 1030092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 15:10:27.624301 1030092 logs.go:123] Gathering logs for kube-apiserver [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c] ...
	I1026 15:10:27.624332 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:27.669436 1030092 logs.go:123] Gathering logs for kube-scheduler [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e] ...
	I1026 15:10:27.669486 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:27.746321 1030092 logs.go:123] Gathering logs for kube-controller-manager [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1] ...
	I1026 15:10:27.746374 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:27.786108 1030092 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:10:27.786147 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:10:27.852417 1030092 logs.go:123] Gathering logs for container status ...
	I1026 15:10:27.852454 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
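The surrounding retry loop amounts to polling the apiserver's healthz endpoint until the connection stops being refused, dumping kubelet, dmesg, and CRI-O logs on each failed pass. A minimal sketch of the probe itself, with the IP, port, and -k (self-signed serving cert) taken from this log:

	until curl -sk --max-time 2 https://192.168.76.2:8443/healthz | grep -q ok; do sleep 3; done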
	I1026 15:10:30.399246 1030092 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:10:30.399728 1030092 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1026 15:10:30.399804 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:10:30.399866 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:10:30.433274 1030092 cri.go:89] found id: "a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:30.433299 1030092 cri.go:89] found id: ""
	I1026 15:10:30.433309 1030092 logs.go:282] 1 containers: [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c]
	I1026 15:10:30.433371 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:30.437616 1030092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:10:30.437692 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:10:30.474672 1030092 cri.go:89] found id: ""
	I1026 15:10:30.474702 1030092 logs.go:282] 0 containers: []
	W1026 15:10:30.474714 1030092 logs.go:284] No container was found matching "etcd"
	I1026 15:10:30.474722 1030092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:10:30.474785 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:10:30.504326 1030092 cri.go:89] found id: ""
	I1026 15:10:30.504355 1030092 logs.go:282] 0 containers: []
	W1026 15:10:30.504365 1030092 logs.go:284] No container was found matching "coredns"
	I1026 15:10:30.504372 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:10:30.504431 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:10:30.533893 1030092 cri.go:89] found id: "933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:30.533915 1030092 cri.go:89] found id: ""
	I1026 15:10:30.533925 1030092 logs.go:282] 1 containers: [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e]
	I1026 15:10:30.533990 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:30.538178 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:10:30.538245 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:10:30.274422 1072816 ssh_runner.go:195] Run: systemctl --version
	I1026 15:10:30.281613 1072816 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:10:30.317620 1072816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:10:30.322634 1072816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:10:30.322707 1072816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:10:30.349963 1072816 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 15:10:30.349989 1072816 start.go:495] detecting cgroup driver to use...
	I1026 15:10:30.350027 1072816 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 15:10:30.350082 1072816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:10:30.368616 1072816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:10:30.381569 1072816 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:10:30.381641 1072816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:10:30.400357 1072816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:10:30.424175 1072816 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:10:30.519660 1072816 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:10:30.635087 1072816 docker.go:234] disabling docker service ...
	I1026 15:10:30.635244 1072816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:10:30.657718 1072816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:10:30.672474 1072816 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:10:30.773152 1072816 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:10:30.876905 1072816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:10:30.890212 1072816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:10:30.906513 1072816 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1026 15:10:30.906597 1072816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:30.917933 1072816 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 15:10:30.918007 1072816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:30.928494 1072816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:30.939454 1072816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:30.949573 1072816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:10:30.963304 1072816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:30.974847 1072816 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:30.989632 1072816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:31.001245 1072816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:10:31.010372 1072816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:10:31.018448 1072816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:10:31.115050 1072816 ssh_runner.go:195] Run: sudo systemctl restart crio
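Taken together, the sed edits above converge on a CRI-O drop-in roughly like the following; a sketch of /etc/crio/crio.conf.d/02-crio.conf after this run, with section placement assumed from the stock CRI-O layout (the log only shows the in-place edits, not the final file):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]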
	I1026 15:10:31.234496 1072816 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:10:31.234569 1072816 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:10:31.239037 1072816 start.go:563] Will wait 60s for crictl version
	I1026 15:10:31.239106 1072816 ssh_runner.go:195] Run: which crictl
	I1026 15:10:31.242887 1072816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:10:31.271736 1072816 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:10:31.271827 1072816 ssh_runner.go:195] Run: crio --version
	I1026 15:10:31.301698 1072816 ssh_runner.go:195] Run: crio --version
	I1026 15:10:31.339075 1072816 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1026 15:10:27.275444 1074625 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 15:10:27.275935 1074625 start.go:159] libmachine.API.Create for "no-preload-475081" (driver="docker")
	I1026 15:10:27.275971 1074625 client.go:168] LocalClient.Create starting
	I1026 15:10:27.276058 1074625 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem
	I1026 15:10:27.276109 1074625 main.go:141] libmachine: Decoding PEM data...
	I1026 15:10:27.276128 1074625 main.go:141] libmachine: Parsing certificate...
	I1026 15:10:27.276437 1074625 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem
	I1026 15:10:27.276502 1074625 main.go:141] libmachine: Decoding PEM data...
	I1026 15:10:27.276525 1074625 main.go:141] libmachine: Parsing certificate...
	I1026 15:10:27.277147 1074625 cli_runner.go:164] Run: docker network inspect no-preload-475081 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 15:10:27.299628 1074625 cli_runner.go:211] docker network inspect no-preload-475081 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 15:10:27.299706 1074625 network_create.go:284] running [docker network inspect no-preload-475081] to gather additional debugging logs...
	I1026 15:10:27.299726 1074625 cli_runner.go:164] Run: docker network inspect no-preload-475081
	W1026 15:10:27.322313 1074625 cli_runner.go:211] docker network inspect no-preload-475081 returned with exit code 1
	I1026 15:10:27.322351 1074625 network_create.go:287] error running [docker network inspect no-preload-475081]: docker network inspect no-preload-475081: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-475081 not found
	I1026 15:10:27.322367 1074625 network_create.go:289] output of [docker network inspect no-preload-475081]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-475081 not found
	
	** /stderr **
	I1026 15:10:27.322497 1074625 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:10:27.356838 1074625 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fa58be42f477 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:e4:ad:45:54:67} reservation:<nil>}
	I1026 15:10:27.357897 1074625 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-788b1aa150f9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:3d:9b:f7:9b:2d} reservation:<nil>}
	I1026 15:10:27.358843 1074625 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3ea0f8afe5af IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:d6:81:f4:17:77:eb} reservation:<nil>}
	I1026 15:10:27.360770 1074625 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d6289da05fd0 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ba:46:02:30:47:06} reservation:<nil>}
	I1026 15:10:27.361640 1074625 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-56ce3fb526f5 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:2e:3a:ce:5d:57:e6} reservation:<nil>}
	I1026 15:10:27.362139 1074625 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-d4e229d938e3 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:7e:a1:7f:77:f5:d3} reservation:<nil>}
	I1026 15:10:27.363023 1074625 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e03b50}
	I1026 15:10:27.363058 1074625 network_create.go:124] attempt to create docker network no-preload-475081 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1026 15:10:27.363129 1074625 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-475081 no-preload-475081
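The subnet scan above walks the private 192.168.x.0/24 pool in steps of 9 (49, 58, 67, ...) and takes the first /24 that no local bridge already claims, landing on 192.168.103.0/24 here. A rough shell equivalent of that probe (minikube does this in network.go, not shell):

	for third in $(seq 49 9 250); do
	  ip route | grep -q "192.168.${third}.0/24" && continue   # taken by an existing br-* bridge
	  echo "free subnet: 192.168.${third}.0/24"; break
	done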
	I1026 15:10:27.393001 1074625 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1026 15:10:27.393909 1074625 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1026 15:10:27.399998 1074625 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1026 15:10:27.403806 1074625 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1026 15:10:27.420497 1074625 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1026 15:10:27.436085 1074625 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1026 15:10:27.453460 1074625 network_create.go:108] docker network no-preload-475081 192.168.103.0/24 created
	I1026 15:10:27.453495 1074625 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-475081" container
	I1026 15:10:27.453693 1074625 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 15:10:27.467498 1074625 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1026 15:10:27.477472 1074625 cli_runner.go:164] Run: docker volume create no-preload-475081 --label name.minikube.sigs.k8s.io=no-preload-475081 --label created_by.minikube.sigs.k8s.io=true
	I1026 15:10:27.500283 1074625 oci.go:103] Successfully created a docker volume no-preload-475081
	I1026 15:10:27.500373 1074625 cli_runner.go:164] Run: docker run --rm --name no-preload-475081-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-475081 --entrypoint /usr/bin/test -v no-preload-475081:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 15:10:27.512623 1074625 cache.go:157] /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1026 15:10:27.512655 1074625 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 287.350752ms
	I1026 15:10:27.512672 1074625 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1026 15:10:27.783597 1074625 cache.go:157] /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1026 15:10:27.783629 1074625 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 558.379389ms
	I1026 15:10:27.783646 1074625 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1026 15:10:27.988793 1074625 oci.go:107] Successfully prepared a docker volume no-preload-475081
	I1026 15:10:27.988828 1074625 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1026 15:10:27.988938 1074625 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 15:10:27.989006 1074625 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 15:10:27.989068 1074625 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 15:10:28.048969 1074625 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-475081 --name no-preload-475081 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-475081 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-475081 --network no-preload-475081 --ip 192.168.103.2 --volume no-preload-475081:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 15:10:28.342019 1074625 cli_runner.go:164] Run: docker container inspect no-preload-475081 --format={{.State.Running}}
	I1026 15:10:28.363454 1074625 cli_runner.go:164] Run: docker container inspect no-preload-475081 --format={{.State.Status}}
	I1026 15:10:28.386659 1074625 cli_runner.go:164] Run: docker exec no-preload-475081 stat /var/lib/dpkg/alternatives/iptables
	I1026 15:10:28.437636 1074625 oci.go:144] the created container "no-preload-475081" has a running status.
	I1026 15:10:28.437669 1074625 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/no-preload-475081/id_rsa...
	I1026 15:10:28.812762 1074625 cache.go:157] /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1026 15:10:28.812803 1074625 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.58721495s
	I1026 15:10:28.812820 1074625 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1026 15:10:28.818076 1074625 cache.go:157] /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1026 15:10:28.818116 1074625 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.592498947s
	I1026 15:10:28.818138 1074625 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1026 15:10:28.868837 1074625 cache.go:157] /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1026 15:10:28.868879 1074625 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.643288598s
	I1026 15:10:28.868897 1074625 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1026 15:10:29.042618 1074625 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-841519/.minikube/machines/no-preload-475081/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 15:10:29.082504 1074625 cli_runner.go:164] Run: docker container inspect no-preload-475081 --format={{.State.Status}}
	I1026 15:10:29.100432 1074625 cache.go:157] /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1026 15:10:29.100466 1074625 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.875358453s
	I1026 15:10:29.100483 1074625 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1026 15:10:29.106955 1074625 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 15:10:29.106979 1074625 kic_runner.go:114] Args: [docker exec --privileged no-preload-475081 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 15:10:29.166186 1074625 cli_runner.go:164] Run: docker container inspect no-preload-475081 --format={{.State.Status}}
	I1026 15:10:29.180960 1074625 cache.go:157] /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1026 15:10:29.180994 1074625 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 1.95566247s
	I1026 15:10:29.181009 1074625 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1026 15:10:29.181037 1074625 cache.go:87] Successfully saved all images to host disk.
	I1026 15:10:29.184950 1074625 machine.go:93] provisionDockerMachine start ...
	I1026 15:10:29.185040 1074625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:10:29.206028 1074625 main.go:141] libmachine: Using SSH client type: native
	I1026 15:10:29.206346 1074625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1026 15:10:29.206364 1074625 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:10:29.352402 1074625 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-475081
	
	I1026 15:10:29.352435 1074625 ubuntu.go:182] provisioning hostname "no-preload-475081"
	I1026 15:10:29.352503 1074625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:10:29.372402 1074625 main.go:141] libmachine: Using SSH client type: native
	I1026 15:10:29.372625 1074625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1026 15:10:29.372638 1074625 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-475081 && echo "no-preload-475081" | sudo tee /etc/hostname
	I1026 15:10:29.525268 1074625 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-475081
	
	I1026 15:10:29.525363 1074625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:10:29.544593 1074625 main.go:141] libmachine: Using SSH client type: native
	I1026 15:10:29.544859 1074625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1026 15:10:29.544879 1074625 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-475081' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-475081/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-475081' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:10:29.690878 1074625 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:10:29.690932 1074625 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-841519/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-841519/.minikube}
	I1026 15:10:29.690964 1074625 ubuntu.go:190] setting up certificates
	I1026 15:10:29.690982 1074625 provision.go:84] configureAuth start
	I1026 15:10:29.691077 1074625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-475081
	I1026 15:10:29.712324 1074625 provision.go:143] copyHostCerts
	I1026 15:10:29.712398 1074625 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem, removing ...
	I1026 15:10:29.712414 1074625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem
	I1026 15:10:29.712503 1074625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem (1082 bytes)
	I1026 15:10:29.712644 1074625 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem, removing ...
	I1026 15:10:29.712656 1074625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem
	I1026 15:10:29.712693 1074625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem (1123 bytes)
	I1026 15:10:29.712856 1074625 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem, removing ...
	I1026 15:10:29.712872 1074625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem
	I1026 15:10:29.712949 1074625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem (1675 bytes)
	I1026 15:10:29.713067 1074625 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem org=jenkins.no-preload-475081 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-475081]
	I1026 15:10:29.762969 1074625 provision.go:177] copyRemoteCerts
	I1026 15:10:29.763031 1074625 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:10:29.763072 1074625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:10:29.784546 1074625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/no-preload-475081/id_rsa Username:docker}
	I1026 15:10:29.887382 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:10:29.907584 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 15:10:29.926465 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:10:29.945955 1074625 provision.go:87] duration metric: took 254.952545ms to configureAuth
	I1026 15:10:29.945997 1074625 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:10:29.946231 1074625 config.go:182] Loaded profile config "no-preload-475081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:10:29.946343 1074625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:10:29.966380 1074625 main.go:141] libmachine: Using SSH client type: native
	I1026 15:10:29.966651 1074625 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1026 15:10:29.966676 1074625 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:10:30.226858 1074625 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:10:30.226886 1074625 machine.go:96] duration metric: took 1.04191595s to provisionDockerMachine
	I1026 15:10:30.226899 1074625 client.go:171] duration metric: took 2.950920771s to LocalClient.Create
	I1026 15:10:30.226926 1074625 start.go:167] duration metric: took 2.95099448s to libmachine.API.Create "no-preload-475081"
	I1026 15:10:30.226940 1074625 start.go:293] postStartSetup for "no-preload-475081" (driver="docker")
	I1026 15:10:30.226958 1074625 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:10:30.227033 1074625 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:10:30.227091 1074625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:10:30.246866 1074625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/no-preload-475081/id_rsa Username:docker}
	I1026 15:10:30.350765 1074625 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:10:30.354675 1074625 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:10:30.354710 1074625 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:10:30.354722 1074625 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/addons for local assets ...
	I1026 15:10:30.354796 1074625 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/files for local assets ...
	I1026 15:10:30.354937 1074625 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem -> 8450952.pem in /etc/ssl/certs
	I1026 15:10:30.355071 1074625 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:10:30.364207 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:10:30.386215 1074625 start.go:296] duration metric: took 159.253153ms for postStartSetup
	I1026 15:10:30.386671 1074625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-475081
	I1026 15:10:30.406484 1074625 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/config.json ...
	I1026 15:10:30.406800 1074625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:10:30.406913 1074625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:10:30.427632 1074625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/no-preload-475081/id_rsa Username:docker}
	I1026 15:10:30.529101 1074625 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:10:30.534994 1074625 start.go:128] duration metric: took 3.269395515s to createHost
	I1026 15:10:30.535028 1074625 start.go:83] releasing machines lock for "no-preload-475081", held for 3.269540492s
	I1026 15:10:30.535113 1074625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-475081
	I1026 15:10:30.555083 1074625 ssh_runner.go:195] Run: cat /version.json
	I1026 15:10:30.555106 1074625 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:10:30.555144 1074625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:10:30.555213 1074625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:10:30.582552 1074625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/no-preload-475081/id_rsa Username:docker}
	I1026 15:10:30.584941 1074625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/no-preload-475081/id_rsa Username:docker}
	I1026 15:10:30.762921 1074625 ssh_runner.go:195] Run: systemctl --version
	I1026 15:10:30.770192 1074625 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:10:30.818289 1074625 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:10:30.823514 1074625 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:10:30.823596 1074625 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:10:30.854613 1074625 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 15:10:30.854645 1074625 start.go:495] detecting cgroup driver to use...
	I1026 15:10:30.854686 1074625 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 15:10:30.854761 1074625 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:10:30.874750 1074625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:10:30.888099 1074625 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:10:30.888186 1074625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:10:30.907974 1074625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:10:30.928050 1074625 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:10:31.027762 1074625 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:10:31.132139 1074625 docker.go:234] disabling docker service ...
	I1026 15:10:31.132226 1074625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:10:31.161293 1074625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:10:31.176331 1074625 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:10:31.276297 1074625 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:10:31.370118 1074625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:10:31.383240 1074625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:10:31.399560 1074625 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:10:31.399635 1074625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:31.412319 1074625 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 15:10:31.412377 1074625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:31.421601 1074625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:31.432352 1074625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:31.441909 1074625 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:10:31.450411 1074625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:31.460586 1074625 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:10:31.475668 1074625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
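Note: taken together, the sed pipeline at 15:10:31.399–31.475 pins the pause image, switches the cgroup manager, and opens low ports for unprivileged pods. A sketch of what /etc/crio/crio.conf.d/02-crio.conf should now contain (reconstructed from the commands above, not dumped from this run):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",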
	I1026 15:10:31.485882 1074625 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:10:31.494128 1074625 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:10:31.502474 1074625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:10:31.591232 1074625 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:10:31.719792 1074625 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:10:31.719873 1074625 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
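Note: the daemon-reload/restart plus the 60-second socket wait at 15:10:31.59–31.72 amount to roughly the following (a sketch; minikube polls stat itself rather than shelling out to timeout):

    sudo systemctl daemon-reload
    sudo systemctl restart crio
    timeout 60 sh -c 'until stat /var/run/crio/crio.sock >/dev/null 2>&1; do sleep 1; done'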
	I1026 15:10:31.724366 1074625 start.go:563] Will wait 60s for crictl version
	I1026 15:10:31.724440 1074625 ssh_runner.go:195] Run: which crictl
	I1026 15:10:31.729831 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:10:31.758490 1074625 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:10:31.758580 1074625 ssh_runner.go:195] Run: crio --version
	I1026 15:10:31.788001 1074625 ssh_runner.go:195] Run: crio --version
	I1026 15:10:31.823230 1074625 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:10:31.824457 1074625 cli_runner.go:164] Run: docker network inspect no-preload-475081 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:10:31.843910 1074625 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1026 15:10:31.848229 1074625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
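Note: the one-liner above is the usual idempotent /etc/hosts edit — filter out any stale host.minikube.internal entry, append the fresh mapping, and copy the temp file back into place. Afterwards the node should show:

    grep host.minikube.internal /etc/hosts
    # 192.168.103.1	host.minikube.internal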
	I1026 15:10:31.859047 1074625 kubeadm.go:883] updating cluster {Name:no-preload-475081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-475081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:10:31.859184 1074625 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:10:31.859233 1074625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:10:31.887848 1074625 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1026 15:10:31.887880 1074625 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1026 15:10:31.887972 1074625 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:10:31.888028 1074625 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1026 15:10:31.888030 1074625 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:10:31.888047 1074625 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:10:31.888001 1074625 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:10:31.888072 1074625 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:10:31.888031 1074625 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:10:31.888069 1074625 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1026 15:10:31.889481 1074625 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:10:31.889483 1074625 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1026 15:10:31.889483 1074625 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:10:31.889484 1074625 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1026 15:10:31.889574 1074625 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:10:31.889493 1074625 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:10:31.889537 1074625 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:10:31.889593 1074625 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
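Note: all eight "No such image" daemon lookups are expected on this host — before loading from its cache, minikube first asks the local Docker daemon for each required image, and none of them have been pulled there. Reproducing one lookup by hand (illustrative):

    docker image inspect --format '{{.Id}}' registry.k8s.io/pause:3.10.1 \
      || echo 'not in the local daemon'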
	I1026 15:10:31.340253 1072816 cli_runner.go:164] Run: docker network inspect old-k8s-version-330914 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:10:31.359222 1072816 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 15:10:31.363637 1072816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:10:31.374870 1072816 kubeadm.go:883] updating cluster {Name:old-k8s-version-330914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-330914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:10:31.375048 1072816 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 15:10:31.375125 1072816 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:10:31.408337 1072816 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:10:31.408360 1072816 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:10:31.408404 1072816 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:10:31.436796 1072816 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:10:31.436820 1072816 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:10:31.436828 1072816 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1026 15:10:31.436927 1072816 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-330914 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-330914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:10:31.437008 1072816 ssh_runner.go:195] Run: crio config
	I1026 15:10:31.486402 1072816 cni.go:84] Creating CNI manager for ""
	I1026 15:10:31.486426 1072816 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:10:31.486454 1072816 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:10:31.486488 1072816 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-330914 NodeName:old-k8s-version-330914 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:10:31.486651 1072816 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-330914"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
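Note: the generated kubeadm config is four YAML documents in one file — InitConfiguration and ClusterConfiguration for kubeadm itself, plus KubeletConfiguration and KubeProxyConfiguration passed through to those components. Once it lands on the node (see the scp to kubeadm.yaml.new just below), the layout is easy to confirm:

    grep '^kind:' /var/tmp/minikube/kubeadm.yaml.new
    # kind: InitConfiguration
    # kind: ClusterConfiguration
    # kind: KubeletConfiguration
    # kind: KubeProxyConfiguration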
	
	I1026 15:10:31.486727 1072816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1026 15:10:31.495375 1072816 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:10:31.495444 1072816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:10:31.503950 1072816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1026 15:10:31.517866 1072816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:10:31.540318 1072816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
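Note: three files were just materialized from memory — the kubelet drop-in, the kubelet unit, and the staged kubeadm config (promoted from kubeadm.yaml.new to kubeadm.yaml at 15:10:33.288 below). To see the merged unit the node will actually run (illustrative):

    systemctl cat kubelet
    # prints /lib/systemd/system/kubelet.service followed by the
    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in written above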
	I1026 15:10:31.554519 1072816 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:10:31.558593 1072816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:10:31.570183 1072816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:10:31.655010 1072816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:10:31.683056 1072816 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914 for IP: 192.168.85.2
	I1026 15:10:31.683080 1072816 certs.go:195] generating shared ca certs ...
	I1026 15:10:31.683101 1072816 certs.go:227] acquiring lock for ca certs: {Name:mkc310765b5f037cf348f6c57ba521193a825757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:31.683294 1072816 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key
	I1026 15:10:31.683368 1072816 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key
	I1026 15:10:31.683385 1072816 certs.go:257] generating profile certs ...
	I1026 15:10:31.683461 1072816 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/client.key
	I1026 15:10:31.683482 1072816 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/client.crt with IP's: []
	I1026 15:10:32.002037 1072816 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/client.crt ...
	I1026 15:10:32.002073 1072816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/client.crt: {Name:mk9eb27b0acc738f8e51fd36dfa2356afc000f1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:32.002303 1072816 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/client.key ...
	I1026 15:10:32.002335 1072816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/client.key: {Name:mkc7b6d36bb3c2ef946755912241f1454e702242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:32.002470 1072816 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.key.925d69a5
	I1026 15:10:32.002495 1072816 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.crt.925d69a5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1026 15:10:32.225436 1072816 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.crt.925d69a5 ...
	I1026 15:10:32.225464 1072816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.crt.925d69a5: {Name:mk6a24e6a1f89a2f77ebed52ff44c979d4a184bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:32.225660 1072816 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.key.925d69a5 ...
	I1026 15:10:32.225681 1072816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.key.925d69a5: {Name:mk3be0046a1baa63d42cce5d152c095adbce996a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:32.225771 1072816 certs.go:382] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.crt.925d69a5 -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.crt
	I1026 15:10:32.225885 1072816 certs.go:386] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.key.925d69a5 -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.key
	I1026 15:10:32.225999 1072816 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/proxy-client.key
	I1026 15:10:32.226027 1072816 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/proxy-client.crt with IP's: []
	I1026 15:10:32.784640 1072816 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/proxy-client.crt ...
	I1026 15:10:32.784675 1072816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/proxy-client.crt: {Name:mk05a5cef04b9cf172f58ba474c236de8669cc02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:32.784880 1072816 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/proxy-client.key ...
	I1026 15:10:32.784899 1072816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/proxy-client.key: {Name:mk314b1116f184c184a6c31bbb87cdd6071d4a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
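Note: the profile now holds three key pairs — the minikube-user client cert, the apiserver serving cert with the IP SANs requested at 15:10:32.002, and the front-proxy client cert for the aggregator. An illustrative SAN check (requires OpenSSL 1.1.1+ for -ext; the relative path stands in for the full /home/jenkins/minikube-integration prefix):

    openssl x509 -noout -ext subjectAltName \
      -in .minikube/profiles/old-k8s-version-330914/apiserver.crt
    # ... IP Address:10.96.0.1, IP Address:127.0.0.1, IP Address:10.0.0.1, IP Address:192.168.85.2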
	I1026 15:10:32.785132 1072816 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem (1338 bytes)
	W1026 15:10:32.785204 1072816 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095_empty.pem, impossibly tiny 0 bytes
	I1026 15:10:32.785219 1072816 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:10:32.785253 1072816 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:10:32.785288 1072816 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:10:32.785321 1072816 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem (1675 bytes)
	I1026 15:10:32.785384 1072816 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:10:32.786060 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:10:32.807132 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:10:32.827766 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:10:32.853383 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:10:32.879599 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1026 15:10:32.905229 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 15:10:32.932852 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:10:32.959535 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 15:10:32.984333 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem --> /usr/share/ca-certificates/845095.pem (1338 bytes)
	I1026 15:10:33.008383 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /usr/share/ca-certificates/8450952.pem (1708 bytes)
	I1026 15:10:33.027719 1072816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:10:33.046154 1072816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:10:33.059296 1072816 ssh_runner.go:195] Run: openssl version
	I1026 15:10:33.065869 1072816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845095.pem && ln -fs /usr/share/ca-certificates/845095.pem /etc/ssl/certs/845095.pem"
	I1026 15:10:33.075535 1072816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845095.pem
	I1026 15:10:33.079710 1072816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:26 /usr/share/ca-certificates/845095.pem
	I1026 15:10:33.079768 1072816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845095.pem
	I1026 15:10:33.115704 1072816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/845095.pem /etc/ssl/certs/51391683.0"
	I1026 15:10:33.125301 1072816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8450952.pem && ln -fs /usr/share/ca-certificates/8450952.pem /etc/ssl/certs/8450952.pem"
	I1026 15:10:33.134687 1072816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8450952.pem
	I1026 15:10:33.138760 1072816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:26 /usr/share/ca-certificates/8450952.pem
	I1026 15:10:33.138838 1072816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8450952.pem
	I1026 15:10:33.176264 1072816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8450952.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:10:33.185687 1072816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:10:33.195157 1072816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:10:33.199515 1072816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:14 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:10:33.199587 1072816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:10:33.235934 1072816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
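Note: the 8-hex-digit link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names, which is why each block runs openssl x509 -hash first — the hash is the link name, suffixed .0. Condensed, the three blocks do the equivalent of:

    for pem in 845095.pem 8450952.pem minikubeCA.pem; do
      h=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/$pem")
      sudo ln -fs "/usr/share/ca-certificates/$pem" "/etc/ssl/certs/$pem"
      sudo ln -fs "/etc/ssl/certs/$pem" "/etc/ssl/certs/$h.0"
    done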
	I1026 15:10:33.245729 1072816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:10:33.249909 1072816 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 15:10:33.249965 1072816 kubeadm.go:400] StartCluster: {Name:old-k8s-version-330914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-330914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:10:33.250054 1072816 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:10:33.250117 1072816 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:10:33.279654 1072816 cri.go:89] found id: ""
	I1026 15:10:33.279722 1072816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:10:33.288544 1072816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:10:33.297266 1072816 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 15:10:33.297331 1072816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:10:33.305971 1072816 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:10:33.305994 1072816 kubeadm.go:157] found existing configuration files:
	
	I1026 15:10:33.306044 1072816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:10:33.314347 1072816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:10:33.314413 1072816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:10:33.322584 1072816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:10:33.332682 1072816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:10:33.332752 1072816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:10:33.342655 1072816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:10:33.353198 1072816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:10:33.353277 1072816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:10:33.362809 1072816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:10:33.372545 1072816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:10:33.372607 1072816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 15:10:33.381111 1072816 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 15:10:33.431402 1072816 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1026 15:10:33.431492 1072816 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 15:10:33.482985 1072816 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 15:10:33.483075 1072816 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 15:10:33.483134 1072816 kubeadm.go:318] OS: Linux
	I1026 15:10:33.483223 1072816 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 15:10:33.483287 1072816 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 15:10:33.483344 1072816 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 15:10:33.483414 1072816 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 15:10:33.483480 1072816 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 15:10:33.483582 1072816 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 15:10:33.483666 1072816 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 15:10:33.483734 1072816 kubeadm.go:318] CGROUPS_IO: enabled
	I1026 15:10:33.574752 1072816 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 15:10:33.574915 1072816 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 15:10:33.575068 1072816 kubeadm.go:318] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1026 15:10:33.766004 1072816 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 15:10:33.770323 1072816 out.go:252]   - Generating certificates and keys ...
	I1026 15:10:33.770439 1072816 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:10:33.770551 1072816 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:10:33.982608 1072816 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:10:34.135530 1072816 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:10:34.213988 1072816 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:10:34.433281 1072816 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:10:34.515682 1072816 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:10:34.515835 1072816 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-330914] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 15:10:34.675628 1072816 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:10:34.675835 1072816 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-330914] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 15:10:30.577561 1030092 cri.go:89] found id: ""
	I1026 15:10:30.577593 1030092 logs.go:282] 0 containers: []
	W1026 15:10:30.577613 1030092 logs.go:284] No container was found matching "kube-proxy"
	I1026 15:10:30.577622 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:10:30.577685 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:10:30.613352 1030092 cri.go:89] found id: "fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:30.613381 1030092 cri.go:89] found id: ""
	I1026 15:10:30.613391 1030092 logs.go:282] 1 containers: [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1]
	I1026 15:10:30.613449 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:30.617851 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:10:30.617925 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:10:30.649422 1030092 cri.go:89] found id: ""
	I1026 15:10:30.649459 1030092 logs.go:282] 0 containers: []
	W1026 15:10:30.649471 1030092 logs.go:284] No container was found matching "kindnet"
	I1026 15:10:30.649480 1030092 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 15:10:30.649542 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 15:10:30.680591 1030092 cri.go:89] found id: ""
	I1026 15:10:30.680623 1030092 logs.go:282] 0 containers: []
	W1026 15:10:30.680633 1030092 logs.go:284] No container was found matching "storage-provisioner"
	I1026 15:10:30.680646 1030092 logs.go:123] Gathering logs for kubelet ...
	I1026 15:10:30.680663 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:10:30.781047 1030092 logs.go:123] Gathering logs for dmesg ...
	I1026 15:10:30.781085 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:10:30.800059 1030092 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:10:30.800092 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:10:30.872357 1030092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 15:10:30.872384 1030092 logs.go:123] Gathering logs for kube-apiserver [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c] ...
	I1026 15:10:30.872402 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:30.910824 1030092 logs.go:123] Gathering logs for kube-scheduler [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e] ...
	I1026 15:10:30.910852 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:30.973301 1030092 logs.go:123] Gathering logs for kube-controller-manager [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1] ...
	I1026 15:10:30.973338 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:31.003970 1030092 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:10:31.003998 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:10:31.079300 1030092 logs.go:123] Gathering logs for container status ...
	I1026 15:10:31.079343 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
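Note: each diagnostics pass in this stream follows the same recipe — resolve container IDs per component with crictl, then tail the last 400 lines of whatever exists (the kubelet journal, dmesg, and the CRI-O journal are collected regardless). For one component by hand:

    id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    sudo crictl logs --tail 400 "$id"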
	I1026 15:10:33.613645 1030092 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:10:33.614197 1030092 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
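Note: the probe above is minikube's readiness loop — GET /healthz on the advertised endpoint and, on connection refused, fall back to another round of log gathering (which follows). The equivalent manual check, illustratively:

    curl -k --max-time 2 https://192.168.76.2:8443/healthz
    # curl: (7) Failed to connect to 192.168.76.2 port 8443: Connection refused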
	I1026 15:10:33.614277 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:10:33.614343 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:10:33.652052 1030092 cri.go:89] found id: "a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:33.652083 1030092 cri.go:89] found id: ""
	I1026 15:10:33.652093 1030092 logs.go:282] 1 containers: [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c]
	I1026 15:10:33.652189 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:33.657696 1030092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:10:33.657779 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:10:33.695307 1030092 cri.go:89] found id: ""
	I1026 15:10:33.695339 1030092 logs.go:282] 0 containers: []
	W1026 15:10:33.695350 1030092 logs.go:284] No container was found matching "etcd"
	I1026 15:10:33.695358 1030092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:10:33.695424 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:10:33.731199 1030092 cri.go:89] found id: ""
	I1026 15:10:33.731230 1030092 logs.go:282] 0 containers: []
	W1026 15:10:33.731241 1030092 logs.go:284] No container was found matching "coredns"
	I1026 15:10:33.731249 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:10:33.731311 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:10:33.764354 1030092 cri.go:89] found id: "933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:33.764382 1030092 cri.go:89] found id: ""
	I1026 15:10:33.764393 1030092 logs.go:282] 1 containers: [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e]
	I1026 15:10:33.764455 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:33.770779 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:10:33.770849 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:10:33.809746 1030092 cri.go:89] found id: ""
	I1026 15:10:33.809778 1030092 logs.go:282] 0 containers: []
	W1026 15:10:33.809787 1030092 logs.go:284] No container was found matching "kube-proxy"
	I1026 15:10:33.809793 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:10:33.809856 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:10:33.847832 1030092 cri.go:89] found id: "fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:33.847857 1030092 cri.go:89] found id: ""
	I1026 15:10:33.847869 1030092 logs.go:282] 1 containers: [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1]
	I1026 15:10:33.847925 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:33.853185 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:10:33.853259 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:10:33.888338 1030092 cri.go:89] found id: ""
	I1026 15:10:33.888369 1030092 logs.go:282] 0 containers: []
	W1026 15:10:33.888388 1030092 logs.go:284] No container was found matching "kindnet"
	I1026 15:10:33.888396 1030092 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 15:10:33.888459 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 15:10:33.924904 1030092 cri.go:89] found id: ""
	I1026 15:10:33.924937 1030092 logs.go:282] 0 containers: []
	W1026 15:10:33.924948 1030092 logs.go:284] No container was found matching "storage-provisioner"
	I1026 15:10:33.924959 1030092 logs.go:123] Gathering logs for kube-scheduler [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e] ...
	I1026 15:10:33.924974 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:33.984061 1030092 logs.go:123] Gathering logs for kube-controller-manager [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1] ...
	I1026 15:10:33.984105 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:34.015141 1030092 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:10:34.015193 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:10:34.086409 1030092 logs.go:123] Gathering logs for container status ...
	I1026 15:10:34.086458 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 15:10:34.124652 1030092 logs.go:123] Gathering logs for kubelet ...
	I1026 15:10:34.124684 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:10:34.225823 1030092 logs.go:123] Gathering logs for dmesg ...
	I1026 15:10:34.225866 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:10:34.243681 1030092 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:10:34.243743 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:10:34.318018 1030092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 15:10:34.318050 1030092 logs.go:123] Gathering logs for kube-apiserver [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c] ...
	I1026 15:10:34.318068 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:35.202607 1072816 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:10:35.453616 1072816 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:10:35.831680 1072816 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:10:35.831797 1072816 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:10:35.949112 1072816 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:10:36.147523 1072816 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:10:36.354875 1072816 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:10:36.624459 1072816 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:10:36.625121 1072816 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:10:36.629821 1072816 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
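The [certs], [kubeconfig], [etcd] and [control-plane] lines above are standard kubeadm init phases that minikube relays verbatim (the kubeadm.go:318 prefix marks relayed kubeadm output). When debugging a failed bootstrap, each phase can also be run in isolation, for example "kubeadm init phase certs all" or "kubeadm init phase kubeconfig all", which regenerate exactly the artifacts listed here.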
	I1026 15:10:32.018681 1074625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:10:32.020893 1074625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:10:32.033545 1074625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1026 15:10:32.036389 1074625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:10:32.041956 1074625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:10:32.059733 1074625 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1026 15:10:32.059802 1074625 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:10:32.059857 1074625 ssh_runner.go:195] Run: which crictl
	I1026 15:10:32.061896 1074625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:10:32.066062 1074625 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1026 15:10:32.066119 1074625 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:10:32.066182 1074625 ssh_runner.go:195] Run: which crictl
	I1026 15:10:32.068105 1074625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1026 15:10:32.081045 1074625 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1026 15:10:32.081099 1074625 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1026 15:10:32.081174 1074625 ssh_runner.go:195] Run: which crictl
	I1026 15:10:32.084339 1074625 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1026 15:10:32.084388 1074625 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:10:32.084441 1074625 ssh_runner.go:195] Run: which crictl
	I1026 15:10:32.095381 1074625 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1026 15:10:32.095426 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:10:32.095437 1074625 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:10:32.095486 1074625 ssh_runner.go:195] Run: which crictl
	I1026 15:10:32.107843 1074625 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1026 15:10:32.107896 1074625 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:10:32.107940 1074625 ssh_runner.go:195] Run: which crictl
	I1026 15:10:32.107939 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:10:32.110203 1074625 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1026 15:10:32.110245 1074625 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1026 15:10:32.110261 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1026 15:10:32.110289 1074625 ssh_runner.go:195] Run: which crictl
	I1026 15:10:32.110292 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:10:32.129103 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:10:32.129151 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:10:32.129111 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:10:32.145406 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:10:32.145550 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1026 15:10:32.146306 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1026 15:10:32.146421 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1026 15:10:32.166820 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1026 15:10:32.175924 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:10:32.176028 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:10:32.185649 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1026 15:10:32.186033 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1026 15:10:32.187811 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1026 15:10:32.190574 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
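The "needs transfer" decisions above come from comparing the image ID that "sudo podman image inspect --format {{.Id}}" reports against the content-addressed ID expected for the cached image; a mismatch or missing image triggers a "crictl rmi" of the stale tag followed by a transfer from the local cache. A rough Go sketch of that check (assumes podman and sudo are available; the expected ID is the pause:3.10.1 hash from the log above):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imageID returns the ID podman has stored for ref; empty means the image
    // is not present in the container runtime.
    func imageID(ref string) string {
        out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Output()
        if err != nil {
            return ""
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        // Expected ID for pause:3.10.1, taken from the log above.
        const want = "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
        ref := "registry.k8s.io/pause:3.10.1"
        if imageID(ref) != want {
            fmt.Printf("%q needs transfer: not present at expected hash\n", ref)
        }
    }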
	I1026 15:10:32.211363 1074625 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1026 15:10:32.211467 1074625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1026 15:10:32.217831 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1026 15:10:32.220691 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1026 15:10:32.225362 1074625 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1026 15:10:32.225750 1074625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1026 15:10:32.230699 1074625 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1026 15:10:32.230813 1074625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1026 15:10:32.238347 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1026 15:10:32.238485 1074625 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1026 15:10:32.238525 1074625 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1026 15:10:32.238567 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1026 15:10:32.238595 1074625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1026 15:10:32.274350 1074625 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1026 15:10:32.274383 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1026 15:10:32.274415 1074625 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1026 15:10:32.274427 1074625 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1026 15:10:32.274438 1074625 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1026 15:10:32.274451 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1026 15:10:32.274498 1074625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1026 15:10:32.274523 1074625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1026 15:10:32.296848 1074625 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1026 15:10:32.296907 1074625 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1026 15:10:32.296949 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1026 15:10:32.296960 1074625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1026 15:10:32.344105 1074625 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1026 15:10:32.344106 1074625 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1026 15:10:32.344148 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1026 15:10:32.344177 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1026 15:10:32.345574 1074625 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1026 15:10:32.345602 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
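Each "existence check ... Process exited with status 1" block above is the expected miss path: minikube stats the tarball on the remote side with stat -c "%s %y" and, when the file is absent, falls back to an scp from the host-side cache. A local-filesystem analogue of that stat-then-copy pattern in Go (paths are illustrative):

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"
    )

    // copyIfMissing copies src to dst only when dst does not exist, mirroring
    // the stat-then-scp existence check in the log above.
    func copyIfMissing(src, dst string) error {
        if _, err := os.Stat(dst); err == nil {
            return nil // already present: skip the transfer
        } else if !errors.Is(err, os.ErrNotExist) {
            return err
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        if err := copyIfMissing("cache/images/pause_3.10.1", "/var/lib/minikube/images/pause_3.10.1"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }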
	I1026 15:10:32.403676 1074625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:10:32.440894 1074625 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1026 15:10:32.440967 1074625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1026 15:10:32.497117 1074625 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1026 15:10:32.497206 1074625 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:10:32.497279 1074625 ssh_runner.go:195] Run: which crictl
	I1026 15:10:32.922489 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:10:32.922641 1074625 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1026 15:10:32.922674 1074625 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1026 15:10:32.922726 1074625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1026 15:10:32.964817 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:10:34.143632 1074625 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.220874662s)
	I1026 15:10:34.143671 1074625 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1026 15:10:34.143699 1074625 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1026 15:10:34.143695 1074625 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.17883613s)
	I1026 15:10:34.143757 1074625 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:10:34.143761 1074625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1026 15:10:34.177240 1074625 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1026 15:10:34.177364 1074625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1026 15:10:35.486100 1074625 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.342311581s)
	I1026 15:10:35.486137 1074625 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1026 15:10:35.486173 1074625 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.308769695s)
	I1026 15:10:35.486209 1074625 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1026 15:10:35.486233 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1026 15:10:35.486179 1074625 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1026 15:10:35.486305 1074625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1026 15:10:36.659635 1074625 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.173287598s)
	I1026 15:10:36.659671 1074625 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1026 15:10:36.659699 1074625 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1026 15:10:36.659753 1074625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1026 15:10:36.631017 1072816 out.go:252]   - Booting up control plane ...
	I1026 15:10:36.631125 1072816 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:10:36.631260 1072816 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:10:36.632077 1072816 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:10:36.647824 1072816 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:10:36.649095 1072816 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:10:36.649203 1072816 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:10:36.763431 1072816 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1026 15:10:36.857206 1030092 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:10:36.857668 1030092 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1026 15:10:36.857739 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:10:36.857813 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:10:36.891242 1030092 cri.go:89] found id: "a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:36.891271 1030092 cri.go:89] found id: ""
	I1026 15:10:36.891283 1030092 logs.go:282] 1 containers: [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c]
	I1026 15:10:36.891346 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:36.895675 1030092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:10:36.895744 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:10:36.925613 1030092 cri.go:89] found id: ""
	I1026 15:10:36.925645 1030092 logs.go:282] 0 containers: []
	W1026 15:10:36.925656 1030092 logs.go:284] No container was found matching "etcd"
	I1026 15:10:36.925664 1030092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:10:36.925736 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:10:36.955042 1030092 cri.go:89] found id: ""
	I1026 15:10:36.955070 1030092 logs.go:282] 0 containers: []
	W1026 15:10:36.955081 1030092 logs.go:284] No container was found matching "coredns"
	I1026 15:10:36.955088 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:10:36.955154 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:10:36.986661 1030092 cri.go:89] found id: "933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:36.986687 1030092 cri.go:89] found id: ""
	I1026 15:10:36.986697 1030092 logs.go:282] 1 containers: [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e]
	I1026 15:10:36.986761 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:36.990955 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:10:36.991029 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:10:37.021347 1030092 cri.go:89] found id: ""
	I1026 15:10:37.021375 1030092 logs.go:282] 0 containers: []
	W1026 15:10:37.021386 1030092 logs.go:284] No container was found matching "kube-proxy"
	I1026 15:10:37.021394 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:10:37.021456 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:10:37.053092 1030092 cri.go:89] found id: "fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:37.053117 1030092 cri.go:89] found id: ""
	I1026 15:10:37.053128 1030092 logs.go:282] 1 containers: [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1]
	I1026 15:10:37.053228 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:37.057878 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:10:37.057959 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:10:37.087829 1030092 cri.go:89] found id: ""
	I1026 15:10:37.087861 1030092 logs.go:282] 0 containers: []
	W1026 15:10:37.087873 1030092 logs.go:284] No container was found matching "kindnet"
	I1026 15:10:37.087881 1030092 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 15:10:37.087938 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 15:10:37.118065 1030092 cri.go:89] found id: ""
	I1026 15:10:37.118091 1030092 logs.go:282] 0 containers: []
	W1026 15:10:37.118100 1030092 logs.go:284] No container was found matching "storage-provisioner"
	I1026 15:10:37.118110 1030092 logs.go:123] Gathering logs for kube-controller-manager [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1] ...
	I1026 15:10:37.118125 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:37.147916 1030092 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:10:37.147949 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:10:37.210610 1030092 logs.go:123] Gathering logs for container status ...
	I1026 15:10:37.210652 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 15:10:37.243472 1030092 logs.go:123] Gathering logs for kubelet ...
	I1026 15:10:37.243504 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:10:37.335697 1030092 logs.go:123] Gathering logs for dmesg ...
	I1026 15:10:37.335736 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:10:37.352960 1030092 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:10:37.352997 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:10:37.418139 1030092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 15:10:37.418180 1030092 logs.go:123] Gathering logs for kube-apiserver [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c] ...
	I1026 15:10:37.418199 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:37.452738 1030092 logs.go:123] Gathering logs for kube-scheduler [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e] ...
	I1026 15:10:37.452781 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:40.018225 1030092 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:10:40.018690 1030092 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1026 15:10:40.018760 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:10:40.018827 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:10:40.050881 1030092 cri.go:89] found id: "a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:40.050908 1030092 cri.go:89] found id: ""
	I1026 15:10:40.050918 1030092 logs.go:282] 1 containers: [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c]
	I1026 15:10:40.050978 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:40.055572 1030092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:10:40.055647 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:10:40.088582 1030092 cri.go:89] found id: ""
	I1026 15:10:40.088621 1030092 logs.go:282] 0 containers: []
	W1026 15:10:40.088632 1030092 logs.go:284] No container was found matching "etcd"
	I1026 15:10:40.088641 1030092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:10:40.088702 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:10:40.120034 1030092 cri.go:89] found id: ""
	I1026 15:10:40.120066 1030092 logs.go:282] 0 containers: []
	W1026 15:10:40.120076 1030092 logs.go:284] No container was found matching "coredns"
	I1026 15:10:40.120085 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:10:40.120149 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:10:40.151346 1030092 cri.go:89] found id: "933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:40.151376 1030092 cri.go:89] found id: ""
	I1026 15:10:40.151387 1030092 logs.go:282] 1 containers: [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e]
	I1026 15:10:40.151453 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:40.159276 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:10:40.159356 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:10:40.190963 1030092 cri.go:89] found id: ""
	I1026 15:10:40.190993 1030092 logs.go:282] 0 containers: []
	W1026 15:10:40.191004 1030092 logs.go:284] No container was found matching "kube-proxy"
	I1026 15:10:40.191012 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:10:40.191070 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:10:40.222082 1030092 cri.go:89] found id: "fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:40.222109 1030092 cri.go:89] found id: ""
	I1026 15:10:40.222119 1030092 logs.go:282] 1 containers: [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1]
	I1026 15:10:40.222204 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:40.226905 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:10:40.226967 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:10:40.260965 1030092 cri.go:89] found id: ""
	I1026 15:10:40.260999 1030092 logs.go:282] 0 containers: []
	W1026 15:10:40.261010 1030092 logs.go:284] No container was found matching "kindnet"
	I1026 15:10:40.261025 1030092 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 15:10:40.261100 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 15:10:40.297012 1030092 cri.go:89] found id: ""
	I1026 15:10:40.297039 1030092 logs.go:282] 0 containers: []
	W1026 15:10:40.297050 1030092 logs.go:284] No container was found matching "storage-provisioner"
	I1026 15:10:40.297062 1030092 logs.go:123] Gathering logs for dmesg ...
	I1026 15:10:40.297079 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:10:40.313501 1030092 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:10:40.313533 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:10:40.373572 1030092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 15:10:40.373596 1030092 logs.go:123] Gathering logs for kube-apiserver [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c] ...
	I1026 15:10:40.373612 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:40.417664 1030092 logs.go:123] Gathering logs for kube-scheduler [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e] ...
	I1026 15:10:40.417712 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:40.483616 1030092 logs.go:123] Gathering logs for kube-controller-manager [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1] ...
	I1026 15:10:40.483661 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:40.525305 1030092 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:10:40.525341 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
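Rounds like the one above repeat on a short interval: the bootstrapper probes https://192.168.76.2:8443/healthz, gets "connection refused" while the apiserver container is still coming up, and gathers component logs between attempts. A stripped-down Go poller in the same spirit (a sketch only; the 4m budget mirrors kubeadm's wait-control-plane message, and TLS verification is skipped because a bare client does not trust the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // For a liveness probe we only care whether anything answers.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.76.2:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for apiserver healthz")
    }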
	I1026 15:10:38.520024 1074625 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.860236493s)
	I1026 15:10:38.520061 1074625 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1026 15:10:38.520093 1074625 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1026 15:10:38.520148 1074625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1026 15:10:39.674142 1074625 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.153961295s)
	I1026 15:10:39.674192 1074625 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1026 15:10:39.674228 1074625 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1026 15:10:39.674288 1074625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1026 15:10:42.266629 1072816 kubeadm.go:318] [apiclient] All control plane components are healthy after 5.502468 seconds
	I1026 15:10:42.266804 1072816 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 15:10:42.280014 1072816 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 15:10:42.970456 1072816 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 15:10:42.970768 1072816 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-330914 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 15:10:43.481696 1072816 kubeadm.go:318] [bootstrap-token] Using token: xh3wal.dc3bxz92s5jgqwbr
	I1026 15:10:43.483250 1072816 out.go:252]   - Configuring RBAC rules ...
	I1026 15:10:43.483439 1072816 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 15:10:43.488482 1072816 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 15:10:43.497415 1072816 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 15:10:43.501420 1072816 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 15:10:43.506734 1072816 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 15:10:43.512791 1072816 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 15:10:43.525458 1072816 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 15:10:43.754807 1072816 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 15:10:43.893980 1072816 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 15:10:43.895266 1072816 kubeadm.go:318] 
	I1026 15:10:43.895362 1072816 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 15:10:43.895372 1072816 kubeadm.go:318] 
	I1026 15:10:43.895468 1072816 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 15:10:43.895477 1072816 kubeadm.go:318] 
	I1026 15:10:43.895526 1072816 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 15:10:43.895598 1072816 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 15:10:43.895666 1072816 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 15:10:43.895682 1072816 kubeadm.go:318] 
	I1026 15:10:43.895764 1072816 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 15:10:43.895773 1072816 kubeadm.go:318] 
	I1026 15:10:43.895836 1072816 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 15:10:43.895844 1072816 kubeadm.go:318] 
	I1026 15:10:43.895912 1072816 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 15:10:43.896005 1072816 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 15:10:43.896116 1072816 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 15:10:43.896138 1072816 kubeadm.go:318] 
	I1026 15:10:43.896270 1072816 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 15:10:43.896371 1072816 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 15:10:43.896381 1072816 kubeadm.go:318] 
	I1026 15:10:43.896485 1072816 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token xh3wal.dc3bxz92s5jgqwbr \
	I1026 15:10:43.896627 1072816 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:17405a11f9ced5253329d88582717a258ab19676719f7fb1d52a2fb8fc3ffa0b \
	I1026 15:10:43.896670 1072816 kubeadm.go:318] 	--control-plane 
	I1026 15:10:43.896684 1072816 kubeadm.go:318] 
	I1026 15:10:43.896796 1072816 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 15:10:43.896806 1072816 kubeadm.go:318] 
	I1026 15:10:43.896900 1072816 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token xh3wal.dc3bxz92s5jgqwbr \
	I1026 15:10:43.897078 1072816 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:17405a11f9ced5253329d88582717a258ab19676719f7fb1d52a2fb8fc3ffa0b 
	I1026 15:10:43.899323 1072816 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1026 15:10:43.899495 1072816 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
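The --discovery-token-ca-cert-hash printed in the join command above is kubeadm's public-key pin: the SHA-256 of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A joining node can recompute it from ca.crt to authenticate the control plane; a small Go sketch (the path assumes kubeadm's default PKI layout):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm's pin is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
        fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
    }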
	I1026 15:10:43.899550 1072816 cni.go:84] Creating CNI manager for ""
	I1026 15:10:43.899563 1072816 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:10:43.902312 1072816 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 15:10:43.903756 1072816 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 15:10:43.909822 1072816 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1026 15:10:43.909846 1072816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 15:10:43.928429 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 15:10:44.768678 1072816 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:10:44.768751 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:44.768787 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-330914 minikube.k8s.io/updated_at=2025_10_26T15_10_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=old-k8s-version-330914 minikube.k8s.io/primary=true
	I1026 15:10:44.860373 1072816 ops.go:34] apiserver oom_adj: -16
	I1026 15:10:44.860606 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:40.593499 1030092 logs.go:123] Gathering logs for container status ...
	I1026 15:10:40.593540 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 15:10:40.633591 1030092 logs.go:123] Gathering logs for kubelet ...
	I1026 15:10:40.633624 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:10:43.273535 1030092 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:10:43.274231 1030092 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1026 15:10:43.274300 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:10:43.274362 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:10:43.305003 1030092 cri.go:89] found id: "a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:43.305033 1030092 cri.go:89] found id: ""
	I1026 15:10:43.305045 1030092 logs.go:282] 1 containers: [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c]
	I1026 15:10:43.305121 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:43.309961 1030092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:10:43.310038 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:10:43.345422 1030092 cri.go:89] found id: ""
	I1026 15:10:43.345451 1030092 logs.go:282] 0 containers: []
	W1026 15:10:43.345461 1030092 logs.go:284] No container was found matching "etcd"
	I1026 15:10:43.345469 1030092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:10:43.345548 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:10:43.382669 1030092 cri.go:89] found id: ""
	I1026 15:10:43.382702 1030092 logs.go:282] 0 containers: []
	W1026 15:10:43.382714 1030092 logs.go:284] No container was found matching "coredns"
	I1026 15:10:43.382722 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:10:43.382861 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:10:43.415044 1030092 cri.go:89] found id: "933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:43.415065 1030092 cri.go:89] found id: ""
	I1026 15:10:43.415075 1030092 logs.go:282] 1 containers: [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e]
	I1026 15:10:43.415132 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:43.419503 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:10:43.419575 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:10:43.450575 1030092 cri.go:89] found id: ""
	I1026 15:10:43.450600 1030092 logs.go:282] 0 containers: []
	W1026 15:10:43.450608 1030092 logs.go:284] No container was found matching "kube-proxy"
	I1026 15:10:43.450614 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:10:43.450662 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:10:43.482543 1030092 cri.go:89] found id: "fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:43.482566 1030092 cri.go:89] found id: ""
	I1026 15:10:43.482577 1030092 logs.go:282] 1 containers: [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1]
	I1026 15:10:43.482630 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:10:43.488081 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:10:43.488205 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:10:43.525646 1030092 cri.go:89] found id: ""
	I1026 15:10:43.525672 1030092 logs.go:282] 0 containers: []
	W1026 15:10:43.525684 1030092 logs.go:284] No container was found matching "kindnet"
	I1026 15:10:43.525692 1030092 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 15:10:43.525763 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 15:10:43.558403 1030092 cri.go:89] found id: ""
	I1026 15:10:43.558432 1030092 logs.go:282] 0 containers: []
	W1026 15:10:43.558443 1030092 logs.go:284] No container was found matching "storage-provisioner"
	I1026 15:10:43.558456 1030092 logs.go:123] Gathering logs for kube-apiserver [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c] ...
	I1026 15:10:43.558475 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:10:43.599649 1030092 logs.go:123] Gathering logs for kube-scheduler [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e] ...
	I1026 15:10:43.599685 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:10:43.658260 1030092 logs.go:123] Gathering logs for kube-controller-manager [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1] ...
	I1026 15:10:43.658304 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:10:43.693382 1030092 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:10:43.693422 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:10:43.791323 1030092 logs.go:123] Gathering logs for container status ...
	I1026 15:10:43.791397 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 15:10:43.848922 1030092 logs.go:123] Gathering logs for kubelet ...
	I1026 15:10:43.848969 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:10:43.972494 1030092 logs.go:123] Gathering logs for dmesg ...
	I1026 15:10:43.972560 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:10:43.991033 1030092 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:10:43.991072 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1026 15:10:43.448396 1074625 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.774078503s)
	I1026 15:10:43.448432 1074625 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1026 15:10:43.448461 1074625 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1026 15:10:43.448507 1074625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1026 15:10:44.149054 1074625 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-841519/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1026 15:10:44.149103 1074625 cache_images.go:124] Successfully loaded all cached images
	I1026 15:10:44.149112 1074625 cache_images.go:93] duration metric: took 12.261214832s to LoadCachedImages
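Images are loaded strictly one at a time (each crio.go:275 "Loading image:" line names the current tarball), with "sudo podman load -i" doing the import into CRI-O's storage; the whole cached set took 12.26s here. A minimal sequential loader in the same shape (the tarball list is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        tarballs := []string{
            "/var/lib/minikube/images/pause_3.10.1",
            "/var/lib/minikube/images/kube-scheduler_v1.34.1",
            // ...and the rest of the cached set
        }
        start := time.Now()
        for _, t := range tarballs {
            // One image at a time, exactly like the crio.go:275 lines above.
            if out, err := exec.Command("sudo", "podman", "load", "-i", t).CombinedOutput(); err != nil {
                fmt.Printf("load %s failed: %v\n%s", t, err, out)
                return
            }
        }
        fmt.Printf("took %s to load cached images\n", time.Since(start))
    }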
	I1026 15:10:44.149128 1074625 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1026 15:10:44.149251 1074625 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-475081 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-475081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
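The generated kubelet drop-in above uses the systemd idiom of an empty ExecStart= line to clear any ExecStart inherited from the base unit before setting minikube's own command line. Minikube renders this unit from the cluster config; a toy text/template rendering with a much-abbreviated flag set (field names are ours, not minikube's):

    package main

    import (
        "os"
        "text/template"
    )

    const unit = "[Unit]\nWants={{.Runtime}}.service\n\n[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf\n\n[Install]\n"

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        _ = t.Execute(os.Stdout, map[string]string{
            "Runtime": "crio",
            "Version": "v1.34.1",
            "Node":    "no-preload-475081",
            "IP":      "192.168.103.2",
        })
    }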
	I1026 15:10:44.149354 1074625 ssh_runner.go:195] Run: crio config
	I1026 15:10:44.203502 1074625 cni.go:84] Creating CNI manager for ""
	I1026 15:10:44.203533 1074625 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:10:44.203995 1074625 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:10:44.204048 1074625 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-475081 NodeName:no-preload-475081 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:10:44.204213 1074625 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-475081"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
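Two settings in the generated config above are deliberate for throwaway clusters: imageGCHighThresholdPercent: 100 plus the 0% evictionHard thresholds switch off image garbage collection and disk-pressure eviction (matching the "disable disk resource management by default" comment), and failSwapOn: false lets the kubelet start on hosts that still have swap enabled. Likewise, the zeroed conntrack values tell kube-proxy to leave the host's nf_conntrack sysctls alone, as the inline "Skip setting" comments note.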
	I1026 15:10:44.204277 1074625 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:10:44.213593 1074625 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1026 15:10:44.213657 1074625 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1026 15:10:44.222314 1074625 binary.go:78] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1026 15:10:44.222370 1074625 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21664-841519/.minikube/cache/bin/linux/amd64/v1.34.1/kubelet
	I1026 15:10:44.222409 1074625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1026 15:10:44.222422 1074625 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21664-841519/.minikube/cache/bin/linux/amd64/v1.34.1/kubeadm
	I1026 15:10:44.227450 1074625 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1026 15:10:44.227484 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/cache/bin/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1026 15:10:44.928824 1074625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:10:44.946284 1074625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1026 15:10:44.950947 1074625 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1026 15:10:44.950983 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/cache/bin/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1026 15:10:45.116204 1074625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1026 15:10:45.120930 1074625 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1026 15:10:45.120963 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/cache/bin/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
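Each binary URL above carries a checksum=file:...sha256 query: the downloader fetches the matching .sha256 from dl.k8s.io and refuses the binary if the digests disagree. The same verification done by hand in Go (a sketch; it buffers the whole binary in memory, and assumes the published .sha256 files contain just the hex digest, which is how Kubernetes release artifacts are published):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    func get(url string) []byte {
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        b, err := io.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        return b
    }

    func main() {
        const base = "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl"
        bin := get(base)
        want := strings.TrimSpace(string(get(base + ".sha256")))
        sum := sha256.Sum256(bin)
        if got := hex.EncodeToString(sum[:]); got != want {
            fmt.Fprintln(os.Stderr, "checksum mismatch, refusing to install")
            os.Exit(1)
        }
        fmt.Println("checksum verified")
    }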
	I1026 15:10:45.300674 1074625 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:10:45.309407 1074625 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1026 15:10:45.323054 1074625 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:10:45.338714 1074625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1026 15:10:45.352305 1074625 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:10:45.356854 1074625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:10:45.368308 1074625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:10:45.464417 1074625 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:10:45.492027 1074625 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081 for IP: 192.168.103.2
	I1026 15:10:45.492050 1074625 certs.go:195] generating shared ca certs ...
	I1026 15:10:45.492072 1074625 certs.go:227] acquiring lock for ca certs: {Name:mkc310765b5f037cf348f6c57ba521193a825757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:45.492245 1074625 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key
	I1026 15:10:45.492304 1074625 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key
	I1026 15:10:45.492319 1074625 certs.go:257] generating profile certs ...
	I1026 15:10:45.492384 1074625 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.key
	I1026 15:10:45.492401 1074625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.crt with IP's: []
	I1026 15:10:45.573728 1074625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.crt ...
	I1026 15:10:45.573764 1074625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.crt: {Name:mk1c68b47d96bf0fa064d0c385a591ce7192cb40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:45.573986 1074625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.key ...
	I1026 15:10:45.574005 1074625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.key: {Name:mk8ff9c5efe791a217f5aec77adc1e800bdbc1cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:45.574141 1074625 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.key.309b7b8c
	I1026 15:10:45.574173 1074625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.crt.309b7b8c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1026 15:10:45.602030 1074625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.crt.309b7b8c ...
	I1026 15:10:45.602063 1074625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.crt.309b7b8c: {Name:mk3c4f606bb3b01f4ead75fd7c60c12657747164 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:45.602271 1074625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.key.309b7b8c ...
	I1026 15:10:45.602294 1074625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.key.309b7b8c: {Name:mkb11285270b24fcdbbedfae253bcf6b4adebe83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:45.602407 1074625 certs.go:382] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.crt.309b7b8c -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.crt
	I1026 15:10:45.602512 1074625 certs.go:386] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.key.309b7b8c -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.key
	I1026 15:10:45.602603 1074625 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/proxy-client.key
	I1026 15:10:45.602626 1074625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/proxy-client.crt with IP's: []
	I1026 15:10:45.764536 1074625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/proxy-client.crt ...
	I1026 15:10:45.764572 1074625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/proxy-client.crt: {Name:mk1b8448ab2933df1fd6cf4ba85128cd72f09cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:45.764797 1074625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/proxy-client.key ...
	I1026 15:10:45.764827 1074625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/proxy-client.key: {Name:mka0a1561f58707904c136e3363859092ae2794d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:45.765044 1074625 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem (1338 bytes)
	W1026 15:10:45.765082 1074625 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095_empty.pem, impossibly tiny 0 bytes
	I1026 15:10:45.765096 1074625 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:10:45.765117 1074625 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:10:45.765142 1074625 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:10:45.765179 1074625 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem (1675 bytes)
	I1026 15:10:45.765216 1074625 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:10:45.765896 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:10:45.785904 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:10:45.805307 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:10:45.824753 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:10:45.843848 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 15:10:45.863063 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 15:10:45.882732 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:10:45.902272 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 15:10:45.923637 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /usr/share/ca-certificates/8450952.pem (1708 bytes)
	I1026 15:10:45.944891 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:10:45.963627 1074625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem --> /usr/share/ca-certificates/845095.pem (1338 bytes)
	I1026 15:10:45.981966 1074625 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:10:45.995743 1074625 ssh_runner.go:195] Run: openssl version
	I1026 15:10:46.002539 1074625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8450952.pem && ln -fs /usr/share/ca-certificates/8450952.pem /etc/ssl/certs/8450952.pem"
	I1026 15:10:46.012149 1074625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8450952.pem
	I1026 15:10:46.016321 1074625 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:26 /usr/share/ca-certificates/8450952.pem
	I1026 15:10:46.016384 1074625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8450952.pem
	I1026 15:10:46.050687 1074625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8450952.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:10:46.059947 1074625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:10:46.068580 1074625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:10:46.072700 1074625 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:14 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:10:46.072759 1074625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:10:46.108914 1074625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:10:46.118716 1074625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845095.pem && ln -fs /usr/share/ca-certificates/845095.pem /etc/ssl/certs/845095.pem"
	I1026 15:10:46.128120 1074625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845095.pem
	I1026 15:10:46.132650 1074625 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:26 /usr/share/ca-certificates/845095.pem
	I1026 15:10:46.132705 1074625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845095.pem
	I1026 15:10:46.167932 1074625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/845095.pem /etc/ssl/certs/51391683.0"
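Editor's note: the `openssl x509 -hash` runs paired with `ln -fs ... /etc/ssl/certs/<hash>.0` above implement OpenSSL's hashed CA directory lookup: the link name is the certificate's subject-name hash plus a collision counter (`.0`). Where a link name like 3ec20f2e.0 comes from, using the same file as the log:

    # print the subject hash OpenSSL uses for c_rehash-style trust links
    openssl x509 -hash -noout -in /usr/share/ca-certificates/8450952.pem
    # -> 3ec20f2e   (hence the link /etc/ssl/certs/3ec20f2e.0)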
	I1026 15:10:46.177719 1074625 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:10:46.181926 1074625 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 15:10:46.181982 1074625 kubeadm.go:400] StartCluster: {Name:no-preload-475081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-475081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:10:46.182082 1074625 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:10:46.182156 1074625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:10:46.211693 1074625 cri.go:89] found id: ""
	I1026 15:10:46.211755 1074625 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:10:46.220472 1074625 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:10:46.229220 1074625 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 15:10:46.229277 1074625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:10:46.238052 1074625 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:10:46.238072 1074625 kubeadm.go:157] found existing configuration files:
	
	I1026 15:10:46.238112 1074625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:10:46.246680 1074625 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:10:46.246761 1074625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:10:46.255221 1074625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:10:46.263862 1074625 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:10:46.263939 1074625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:10:46.271979 1074625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:10:46.280209 1074625 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:10:46.280270 1074625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:10:46.288528 1074625 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:10:46.297130 1074625 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:10:46.297217 1074625 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 15:10:46.305311 1074625 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 15:10:46.342542 1074625 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 15:10:46.342630 1074625 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 15:10:46.365812 1074625 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 15:10:46.365895 1074625 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 15:10:46.365948 1074625 kubeadm.go:318] OS: Linux
	I1026 15:10:46.366013 1074625 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 15:10:46.366084 1074625 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 15:10:46.366156 1074625 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 15:10:46.366256 1074625 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 15:10:46.366327 1074625 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 15:10:46.366407 1074625 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 15:10:46.366487 1074625 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 15:10:46.366550 1074625 kubeadm.go:318] CGROUPS_IO: enabled
	I1026 15:10:46.434487 1074625 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 15:10:46.434684 1074625 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 15:10:46.434850 1074625 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 15:10:46.449417 1074625 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 15:10:46.451638 1074625 out.go:252]   - Generating certificates and keys ...
	I1026 15:10:46.451718 1074625 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:10:46.451799 1074625 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:10:46.544980 1074625 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:10:46.942896 1074625 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:10:45.361129 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:45.861002 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:46.361407 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:46.861087 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:47.361108 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:47.861287 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:48.361364 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:48.861614 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:49.360835 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:49.860964 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:47.293244 1074625 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:10:47.587210 1074625 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:10:47.722490 1074625 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:10:47.722658 1074625 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-475081] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1026 15:10:48.073995 1074625 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:10:48.074207 1074625 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-475081] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1026 15:10:48.513259 1074625 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:10:48.879824 1074625 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:10:49.408563 1074625 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:10:49.408631 1074625 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:10:49.740887 1074625 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:10:49.781069 1074625 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 15:10:50.006512 1074625 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:10:50.126628 1074625 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:10:50.678470 1074625 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:10:50.679149 1074625 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:10:50.684540 1074625 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 15:10:50.688380 1074625 out.go:252]   - Booting up control plane ...
	I1026 15:10:50.688510 1074625 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:10:50.688604 1074625 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:10:50.688687 1074625 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:10:50.705448 1074625 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:10:50.705644 1074625 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:10:50.714300 1074625 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:10:50.714551 1074625 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:10:50.714643 1074625 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:10:50.827921 1074625 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:10:50.828130 1074625 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 15:10:51.829638 1074625 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001782547s
	I1026 15:10:51.832732 1074625 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 15:10:51.832890 1074625 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1026 15:10:51.833025 1074625 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 15:10:51.833156 1074625 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 15:10:50.361214 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:50.861506 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:51.361237 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:51.860933 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:52.361411 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:52.861266 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:53.360749 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:53.861343 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:54.360799 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:54.861123 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:52.994362 1074625 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.161540163s
	I1026 15:10:53.884480 1074625 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.051529219s
	I1026 15:10:55.334524 1074625 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501853922s
	I1026 15:10:55.349125 1074625 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 15:10:55.360947 1074625 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 15:10:55.383433 1074625 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 15:10:55.383698 1074625 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-475081 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 15:10:55.393461 1074625 kubeadm.go:318] [bootstrap-token] Using token: nw95n1.djczsarbkw9vs3el
	I1026 15:10:54.064311 1030092 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.073214314s)
	W1026 15:10:54.064366 1030092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1026 15:10:55.362183 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:55.861389 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:56.360928 1072816 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:56.439354 1072816 kubeadm.go:1113] duration metric: took 11.670664157s to wait for elevateKubeSystemPrivileges
	I1026 15:10:56.439391 1072816 kubeadm.go:402] duration metric: took 23.189428634s to StartCluster
	I1026 15:10:56.439415 1072816 settings.go:142] acquiring lock: {Name:mkab79daecf1fab35293493e1e2484069a81f3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:56.439491 1072816 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:10:56.440806 1072816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:10:56.441086 1072816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:10:56.441089 1072816 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:10:56.441151 1072816 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:10:56.441307 1072816 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-330914"
	I1026 15:10:56.441336 1072816 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-330914"
	I1026 15:10:56.441337 1072816 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-330914"
	I1026 15:10:56.441356 1072816 config.go:182] Loaded profile config "old-k8s-version-330914": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 15:10:56.441362 1072816 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-330914"
	I1026 15:10:56.441373 1072816 host.go:66] Checking if "old-k8s-version-330914" exists ...
	I1026 15:10:56.441763 1072816 cli_runner.go:164] Run: docker container inspect old-k8s-version-330914 --format={{.State.Status}}
	I1026 15:10:56.442039 1072816 cli_runner.go:164] Run: docker container inspect old-k8s-version-330914 --format={{.State.Status}}
	I1026 15:10:56.442757 1072816 out.go:179] * Verifying Kubernetes components...
	I1026 15:10:56.444270 1072816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:10:56.467649 1072816 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-330914"
	I1026 15:10:56.467701 1072816 host.go:66] Checking if "old-k8s-version-330914" exists ...
	I1026 15:10:56.468466 1072816 cli_runner.go:164] Run: docker container inspect old-k8s-version-330914 --format={{.State.Status}}
	I1026 15:10:56.468484 1072816 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:10:55.394882 1074625 out.go:252]   - Configuring RBAC rules ...
	I1026 15:10:55.395049 1074625 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 15:10:55.399256 1074625 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 15:10:55.405568 1074625 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 15:10:55.408575 1074625 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 15:10:55.411658 1074625 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 15:10:55.415608 1074625 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 15:10:55.740841 1074625 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 15:10:56.156245 1074625 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 15:10:56.740762 1074625 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 15:10:56.741898 1074625 kubeadm.go:318] 
	I1026 15:10:56.741988 1074625 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 15:10:56.742003 1074625 kubeadm.go:318] 
	I1026 15:10:56.742237 1074625 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 15:10:56.742268 1074625 kubeadm.go:318] 
	I1026 15:10:56.742306 1074625 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 15:10:56.742382 1074625 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 15:10:56.742447 1074625 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 15:10:56.742483 1074625 kubeadm.go:318] 
	I1026 15:10:56.742567 1074625 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 15:10:56.742581 1074625 kubeadm.go:318] 
	I1026 15:10:56.742638 1074625 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 15:10:56.742649 1074625 kubeadm.go:318] 
	I1026 15:10:56.742717 1074625 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 15:10:56.742983 1074625 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 15:10:56.743113 1074625 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 15:10:56.743125 1074625 kubeadm.go:318] 
	I1026 15:10:56.743289 1074625 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 15:10:56.743395 1074625 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 15:10:56.743406 1074625 kubeadm.go:318] 
	I1026 15:10:56.743523 1074625 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token nw95n1.djczsarbkw9vs3el \
	I1026 15:10:56.743675 1074625 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:17405a11f9ced5253329d88582717a258ab19676719f7fb1d52a2fb8fc3ffa0b \
	I1026 15:10:56.743719 1074625 kubeadm.go:318] 	--control-plane 
	I1026 15:10:56.743729 1074625 kubeadm.go:318] 
	I1026 15:10:56.743850 1074625 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 15:10:56.743861 1074625 kubeadm.go:318] 
	I1026 15:10:56.743976 1074625 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token nw95n1.djczsarbkw9vs3el \
	I1026 15:10:56.744123 1074625 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:17405a11f9ced5253329d88582717a258ab19676719f7fb1d52a2fb8fc3ffa0b 
	I1026 15:10:56.746845 1074625 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1026 15:10:56.747008 1074625 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
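Editor's note: the sha256:... value in the join commands above is the hash of the cluster CA's public key. If the printed value is lost, the standard kubeadm-documented way to recompute it on the control plane is the pipeline below (the CA path follows the certificateDir logged earlier; assumes an RSA CA key, which is kubeadm's default):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'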
	I1026 15:10:56.747040 1074625 cni.go:84] Creating CNI manager for ""
	I1026 15:10:56.747050 1074625 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:10:56.750026 1074625 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 15:10:56.751356 1074625 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 15:10:56.757248 1074625 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 15:10:56.757273 1074625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 15:10:56.774800 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 15:10:56.469932 1072816 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:10:56.469955 1072816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:10:56.470019 1072816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:10:56.504723 1072816 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:10:56.504751 1072816 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:10:56.504829 1072816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:10:56.515311 1072816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/old-k8s-version-330914/id_rsa Username:docker}
	I1026 15:10:56.538782 1072816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/old-k8s-version-330914/id_rsa Username:docker}
	I1026 15:10:56.554662 1072816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
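Editor's note: the sed pipeline above rewrites the coredns ConfigMap in place, inserting a hosts block ahead of the forward plugin (so host.minikube.internal resolves to the host gateway) and a log directive ahead of errors. The resulting Corefile fragment looks roughly like this (reconstructed from the sed expressions, not captured output):

            log
            errors
            ...
            hosts {
               192.168.85.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf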
	I1026 15:10:56.602563 1072816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:10:56.641263 1072816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:10:56.660140 1072816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:10:56.799357 1072816 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1026 15:10:56.800805 1072816 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-330914" to be "Ready" ...
	I1026 15:10:57.119527 1072816 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1026 15:10:57.120858 1072816 addons.go:514] duration metric: took 679.703896ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 15:10:57.304771 1072816 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-330914" context rescaled to 1 replicas
	W1026 15:10:58.804053 1072816 node_ready.go:57] node "old-k8s-version-330914" has "Ready":"False" status (will retry)
	I1026 15:10:56.564521 1030092 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:10:57.097500 1074625 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:10:57.097594 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:57.097690 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-475081 minikube.k8s.io/updated_at=2025_10_26T15_10_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=no-preload-475081 minikube.k8s.io/primary=true
	I1026 15:10:57.113722 1074625 ops.go:34] apiserver oom_adj: -16
	I1026 15:10:57.197288 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:57.697812 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:58.197426 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:58.698241 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:59.197487 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:10:59.697418 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:11:00.197572 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:11:00.697972 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:11:01.197817 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:11:01.698041 1074625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:11:01.775435 1074625 kubeadm.go:1113] duration metric: took 4.677917592s to wait for elevateKubeSystemPrivileges
	I1026 15:11:01.775471 1074625 kubeadm.go:402] duration metric: took 15.59349307s to StartCluster
	I1026 15:11:01.775495 1074625 settings.go:142] acquiring lock: {Name:mkab79daecf1fab35293493e1e2484069a81f3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:11:01.775575 1074625 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:11:01.776938 1074625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:11:01.777226 1074625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:11:01.777236 1074625 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:11:01.777303 1074625 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:11:01.777422 1074625 addons.go:69] Setting storage-provisioner=true in profile "no-preload-475081"
	I1026 15:11:01.777437 1074625 addons.go:69] Setting default-storageclass=true in profile "no-preload-475081"
	I1026 15:11:01.777458 1074625 config.go:182] Loaded profile config "no-preload-475081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:11:01.777465 1074625 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-475081"
	I1026 15:11:01.777443 1074625 addons.go:238] Setting addon storage-provisioner=true in "no-preload-475081"
	I1026 15:11:01.777528 1074625 host.go:66] Checking if "no-preload-475081" exists ...
	I1026 15:11:01.777838 1074625 cli_runner.go:164] Run: docker container inspect no-preload-475081 --format={{.State.Status}}
	I1026 15:11:01.778079 1074625 cli_runner.go:164] Run: docker container inspect no-preload-475081 --format={{.State.Status}}
	I1026 15:11:01.779866 1074625 out.go:179] * Verifying Kubernetes components...
	I1026 15:11:01.782268 1074625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:11:01.808631 1074625 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:11:01.809013 1074625 addons.go:238] Setting addon default-storageclass=true in "no-preload-475081"
	I1026 15:11:01.809066 1074625 host.go:66] Checking if "no-preload-475081" exists ...
	I1026 15:11:01.809679 1074625 cli_runner.go:164] Run: docker container inspect no-preload-475081 --format={{.State.Status}}
	I1026 15:11:01.810802 1074625 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:11:01.810831 1074625 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:11:01.810897 1074625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:11:01.842886 1074625 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:11:01.842913 1074625 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:11:01.842981 1074625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:11:01.844290 1074625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/no-preload-475081/id_rsa Username:docker}
	I1026 15:11:01.873435 1074625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/no-preload-475081/id_rsa Username:docker}
	I1026 15:11:01.901089 1074625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 15:11:01.977866 1074625 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:11:01.983106 1074625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:11:01.994377 1074625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:11:02.102811 1074625 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1026 15:11:02.103807 1074625 node_ready.go:35] waiting up to 6m0s for node "no-preload-475081" to be "Ready" ...
	I1026 15:11:02.327018 1074625 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1026 15:11:01.304149 1072816 node_ready.go:57] node "old-k8s-version-330914" has "Ready":"False" status (will retry)
	W1026 15:11:03.304444 1072816 node_ready.go:57] node "old-k8s-version-330914" has "Ready":"False" status (will retry)
	I1026 15:11:01.565457 1030092 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1026 15:11:01.565526 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:11:01.565592 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:11:01.595320 1030092 cri.go:89] found id: "0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703"
	I1026 15:11:01.595349 1030092 cri.go:89] found id: "a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	I1026 15:11:01.595355 1030092 cri.go:89] found id: ""
	I1026 15:11:01.595366 1030092 logs.go:282] 2 containers: [0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703 a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c]
	I1026 15:11:01.595433 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:01.599897 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:01.604236 1030092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:11:01.604320 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:11:01.638070 1030092 cri.go:89] found id: ""
	I1026 15:11:01.638100 1030092 logs.go:282] 0 containers: []
	W1026 15:11:01.638113 1030092 logs.go:284] No container was found matching "etcd"
	I1026 15:11:01.638121 1030092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:11:01.638265 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:11:01.671222 1030092 cri.go:89] found id: ""
	I1026 15:11:01.671258 1030092 logs.go:282] 0 containers: []
	W1026 15:11:01.671269 1030092 logs.go:284] No container was found matching "coredns"
	I1026 15:11:01.671288 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:11:01.671367 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:11:01.702120 1030092 cri.go:89] found id: "933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:11:01.702154 1030092 cri.go:89] found id: ""
	I1026 15:11:01.702180 1030092 logs.go:282] 1 containers: [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e]
	I1026 15:11:01.702245 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:01.707228 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:11:01.707304 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:11:01.740378 1030092 cri.go:89] found id: ""
	I1026 15:11:01.740410 1030092 logs.go:282] 0 containers: []
	W1026 15:11:01.740422 1030092 logs.go:284] No container was found matching "kube-proxy"
	I1026 15:11:01.740430 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:11:01.740490 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:11:01.773069 1030092 cri.go:89] found id: "51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b"
	I1026 15:11:01.773102 1030092 cri.go:89] found id: "fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:11:01.773107 1030092 cri.go:89] found id: ""
	I1026 15:11:01.773118 1030092 logs.go:282] 2 containers: [51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1]
	I1026 15:11:01.773198 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:01.777963 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:01.783689 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:11:01.783782 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:11:01.831997 1030092 cri.go:89] found id: ""
	I1026 15:11:01.832030 1030092 logs.go:282] 0 containers: []
	W1026 15:11:01.832042 1030092 logs.go:284] No container was found matching "kindnet"
	I1026 15:11:01.832050 1030092 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 15:11:01.832121 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 15:11:01.882822 1030092 cri.go:89] found id: ""
	I1026 15:11:01.882853 1030092 logs.go:282] 0 containers: []
	W1026 15:11:01.882923 1030092 logs.go:284] No container was found matching "storage-provisioner"
	I1026 15:11:01.882949 1030092 logs.go:123] Gathering logs for kubelet ...
	I1026 15:11:01.882965 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:11:02.004970 1030092 logs.go:123] Gathering logs for kube-apiserver [0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703] ...
	I1026 15:11:02.005012 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703"
	I1026 15:11:02.056875 1030092 logs.go:123] Gathering logs for kube-controller-manager [51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b] ...
	I1026 15:11:02.056922 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b"
	I1026 15:11:02.091785 1030092 logs.go:123] Gathering logs for kube-controller-manager [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1] ...
	I1026 15:11:02.091833 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:11:02.130241 1030092 logs.go:123] Gathering logs for dmesg ...
	I1026 15:11:02.130278 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:11:02.152749 1030092 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:11:02.152786 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
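Editor's note: the log-gathering pass above is just crictl driven by name filters. A hand-run equivalent for one component, using the same flags the log records (illustrative; container IDs are whatever crictl reports on the node):

    # find the kube-apiserver container id, then tail its last 400 lines
    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    sudo crictl logs --tail 400 "$ID"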
	I1026 15:11:02.328238 1074625 addons.go:514] duration metric: took 550.948627ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 15:11:02.607897 1074625 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-475081" context rescaled to 1 replicas
	W1026 15:11:04.107787 1074625 node_ready.go:57] node "no-preload-475081" has "Ready":"False" status (will retry)
	W1026 15:11:06.606743 1074625 node_ready.go:57] node "no-preload-475081" has "Ready":"False" status (will retry)
	W1026 15:11:05.804721 1072816 node_ready.go:57] node "old-k8s-version-330914" has "Ready":"False" status (will retry)
	W1026 15:11:08.304346 1072816 node_ready.go:57] node "old-k8s-version-330914" has "Ready":"False" status (will retry)
	I1026 15:11:09.803688 1072816 node_ready.go:49] node "old-k8s-version-330914" is "Ready"
	I1026 15:11:09.803716 1072816 node_ready.go:38] duration metric: took 13.00287656s for node "old-k8s-version-330914" to be "Ready" ...
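
The node_ready.go lines above are a poll-until-Ready loop against the node's status conditions. The same wait can be reproduced from outside minikube with the standard kubectl wait subcommand; a minimal sketch in Go, where the node name comes from the log and the 4m timeout is an illustrative choice, not minikube's value:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Block until the node reports the Ready condition, mirroring the
	// node_ready.go retry loop traced in the log above.
	cmd := exec.Command("kubectl", "wait", "--for=condition=Ready",
		"node/old-k8s-version-330914", "--timeout=4m")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	_ = cmd.Run()
}
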
	I1026 15:11:09.803732 1072816 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:11:09.803798 1072816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:11:09.816563 1072816 api_server.go:72] duration metric: took 13.375438769s to wait for apiserver process to appear ...
	I1026 15:11:09.816590 1072816 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:11:09.816611 1072816 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 15:11:09.820927 1072816 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1026 15:11:09.822208 1072816 api_server.go:141] control plane version: v1.28.0
	I1026 15:11:09.822238 1072816 api_server.go:131] duration metric: took 5.639605ms to wait for apiserver health ...
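
The healthz wait above is plain HTTPS polling: minikube hits /healthz until it answers 200 with the body "ok", then reads the control plane version. A minimal sketch of that loop, assuming an illustrative 500ms interval; the sketch skips TLS verification, whereas minikube trusts the cluster CA it generated:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the endpoint until it returns 200, the way the
// api_server.go lines above report it ("returned 200: ok").
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption for the sketch only: skip certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute))
}
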
	I1026 15:11:09.822250 1072816 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:11:09.828117 1072816 system_pods.go:59] 8 kube-system pods found
	I1026 15:11:09.828153 1072816 system_pods.go:61] "coredns-5dd5756b68-hzjqn" [21211baf-4153-41c8-aacc-6d313dcdef82] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:11:09.828159 1072816 system_pods.go:61] "etcd-old-k8s-version-330914" [cb37b501-d930-4a0e-8e96-9aa97fbfef91] Running
	I1026 15:11:09.828176 1072816 system_pods.go:61] "kindnet-b8hhx" [522edddb-fb4b-4e11-a49f-48843f236bab] Running
	I1026 15:11:09.828180 1072816 system_pods.go:61] "kube-apiserver-old-k8s-version-330914" [d1f54bcd-dcc1-4654-90ab-765846ebeaf7] Running
	I1026 15:11:09.828185 1072816 system_pods.go:61] "kube-controller-manager-old-k8s-version-330914" [73822523-0f7b-41ad-a7ed-5cf10ec4480a] Running
	I1026 15:11:09.828188 1072816 system_pods.go:61] "kube-proxy-829lp" [b212cf79-e2d5-49ef-9e66-80ffcd18774f] Running
	I1026 15:11:09.828192 1072816 system_pods.go:61] "kube-scheduler-old-k8s-version-330914" [3b01ee94-ea99-49d9-9a73-e2cba374721f] Running
	I1026 15:11:09.828197 1072816 system_pods.go:61] "storage-provisioner" [d505b114-6834-4c0b-858b-a785482ca1ec] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:11:09.828204 1072816 system_pods.go:74] duration metric: took 5.946507ms to wait for pod list to return data ...
	I1026 15:11:09.828215 1072816 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:11:09.830701 1072816 default_sa.go:45] found service account: "default"
	I1026 15:11:09.830739 1072816 default_sa.go:55] duration metric: took 2.516755ms for default service account to be created ...
	I1026 15:11:09.830751 1072816 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:11:09.833947 1072816 system_pods.go:86] 8 kube-system pods found
	I1026 15:11:09.833980 1072816 system_pods.go:89] "coredns-5dd5756b68-hzjqn" [21211baf-4153-41c8-aacc-6d313dcdef82] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:11:09.833986 1072816 system_pods.go:89] "etcd-old-k8s-version-330914" [cb37b501-d930-4a0e-8e96-9aa97fbfef91] Running
	I1026 15:11:09.833993 1072816 system_pods.go:89] "kindnet-b8hhx" [522edddb-fb4b-4e11-a49f-48843f236bab] Running
	I1026 15:11:09.834001 1072816 system_pods.go:89] "kube-apiserver-old-k8s-version-330914" [d1f54bcd-dcc1-4654-90ab-765846ebeaf7] Running
	I1026 15:11:09.834008 1072816 system_pods.go:89] "kube-controller-manager-old-k8s-version-330914" [73822523-0f7b-41ad-a7ed-5cf10ec4480a] Running
	I1026 15:11:09.834012 1072816 system_pods.go:89] "kube-proxy-829lp" [b212cf79-e2d5-49ef-9e66-80ffcd18774f] Running
	I1026 15:11:09.834015 1072816 system_pods.go:89] "kube-scheduler-old-k8s-version-330914" [3b01ee94-ea99-49d9-9a73-e2cba374721f] Running
	I1026 15:11:09.834020 1072816 system_pods.go:89] "storage-provisioner" [d505b114-6834-4c0b-858b-a785482ca1ec] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:11:09.834048 1072816 retry.go:31] will retry after 262.214269ms: missing components: kube-dns
	I1026 15:11:10.100403 1072816 system_pods.go:86] 8 kube-system pods found
	I1026 15:11:10.100434 1072816 system_pods.go:89] "coredns-5dd5756b68-hzjqn" [21211baf-4153-41c8-aacc-6d313dcdef82] Running
	I1026 15:11:10.100443 1072816 system_pods.go:89] "etcd-old-k8s-version-330914" [cb37b501-d930-4a0e-8e96-9aa97fbfef91] Running
	I1026 15:11:10.100448 1072816 system_pods.go:89] "kindnet-b8hhx" [522edddb-fb4b-4e11-a49f-48843f236bab] Running
	I1026 15:11:10.100453 1072816 system_pods.go:89] "kube-apiserver-old-k8s-version-330914" [d1f54bcd-dcc1-4654-90ab-765846ebeaf7] Running
	I1026 15:11:10.100461 1072816 system_pods.go:89] "kube-controller-manager-old-k8s-version-330914" [73822523-0f7b-41ad-a7ed-5cf10ec4480a] Running
	I1026 15:11:10.100465 1072816 system_pods.go:89] "kube-proxy-829lp" [b212cf79-e2d5-49ef-9e66-80ffcd18774f] Running
	I1026 15:11:10.100470 1072816 system_pods.go:89] "kube-scheduler-old-k8s-version-330914" [3b01ee94-ea99-49d9-9a73-e2cba374721f] Running
	I1026 15:11:10.100474 1072816 system_pods.go:89] "storage-provisioner" [d505b114-6834-4c0b-858b-a785482ca1ec] Running
	I1026 15:11:10.100485 1072816 system_pods.go:126] duration metric: took 269.725842ms to wait for k8s-apps to be running ...
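
The "will retry after 262.214269ms: missing components: kube-dns" line above shows the shape of this wait: re-list the kube-system pods, and if a required component is still Pending, sleep a short randomized interval and try again. A minimal sketch of that pattern; the interval range and the stubbed lister are illustrative, not minikube's actual retry.go code:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForComponents re-runs list() until no required component is missing,
// sleeping a randomized delay between attempts.
func waitForComponents(list func() []string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		missing := list()
		if len(missing) == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("missing components: %v", missing)
		}
		time.Sleep(time.Duration(200+rand.Intn(300)) * time.Millisecond)
	}
}

func main() {
	attempts := 0
	err := waitForComponents(func() []string {
		attempts++
		if attempts < 3 {
			return []string{"kube-dns"} // still Pending
		}
		return nil // all Running
	}, time.Minute)
	fmt.Println(err)
}
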
	I1026 15:11:10.100500 1072816 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:11:10.100551 1072816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:11:10.114894 1072816 system_svc.go:56] duration metric: took 14.384166ms WaitForService to wait for kubelet
	I1026 15:11:10.114921 1072816 kubeadm.go:586] duration metric: took 13.6738053s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:11:10.114939 1072816 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:11:10.117463 1072816 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 15:11:10.117488 1072816 node_conditions.go:123] node cpu capacity is 8
	I1026 15:11:10.117503 1072816 node_conditions.go:105] duration metric: took 2.559987ms to run NodePressure ...
	I1026 15:11:10.117516 1072816 start.go:241] waiting for startup goroutines ...
	I1026 15:11:10.117523 1072816 start.go:246] waiting for cluster config update ...
	I1026 15:11:10.117533 1072816 start.go:255] writing updated cluster config ...
	I1026 15:11:10.117783 1072816 ssh_runner.go:195] Run: rm -f paused
	I1026 15:11:10.121775 1072816 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:11:10.125920 1072816 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-hzjqn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:10.130518 1072816 pod_ready.go:94] pod "coredns-5dd5756b68-hzjqn" is "Ready"
	I1026 15:11:10.130544 1072816 pod_ready.go:86] duration metric: took 4.603177ms for pod "coredns-5dd5756b68-hzjqn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:10.133391 1072816 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:10.137337 1072816 pod_ready.go:94] pod "etcd-old-k8s-version-330914" is "Ready"
	I1026 15:11:10.137356 1072816 pod_ready.go:86] duration metric: took 3.942349ms for pod "etcd-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:10.140016 1072816 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:10.144535 1072816 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-330914" is "Ready"
	I1026 15:11:10.144557 1072816 pod_ready.go:86] duration metric: took 4.519342ms for pod "kube-apiserver-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:10.147293 1072816 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:05.612330 1030092 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (3.459518512s)
	W1026 15:11:05.612379 1030092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:60576->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:60576->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1026 15:11:05.612392 1030092 logs.go:123] Gathering logs for kube-apiserver [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c] ...
	I1026 15:11:05.612409 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	W1026 15:11:05.640274 1030092 logs.go:130] failed kube-apiserver [a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c": Process exited with status 1
	stdout:
	
	stderr:
	E1026 15:11:05.637593    5077 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c\": container with ID starting with a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c not found: ID does not exist" containerID="a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	time="2025-10-26T15:11:05Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c\": container with ID starting with a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c not found: ID does not exist"
	 output: 
	** stderr ** 
	E1026 15:11:05.637593    5077 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c\": container with ID starting with a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c not found: ID does not exist" containerID="a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c"
	time="2025-10-26T15:11:05Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c\": container with ID starting with a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c not found: ID does not exist"
	
	** /stderr **
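
The NotFound failure above is a race, not a bug in the report: the container ID was returned by "crictl ps -a" but the exited kube-apiserver container was removed before "crictl logs" ran. A sketch of a tolerant log fetch that treats that race as a benign skip (the error-string match is an assumption for the sketch; the container ID is the one from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailContainerLogs runs "crictl logs" but skips containers that were
// garbage-collected between listing and log collection.
func tailContainerLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	if err != nil && strings.Contains(string(out), "NotFound") {
		return "", nil
	}
	return string(out), err
}

func main() {
	logs, err := tailContainerLogs("a5ffe7541560608968783892c8ca691483123c39be8720ed941d6c30e39fe21c")
	fmt.Println(len(logs), err)
}
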
	I1026 15:11:05.640299 1030092 logs.go:123] Gathering logs for kube-scheduler [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e] ...
	I1026 15:11:05.640313 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:11:05.695066 1030092 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:11:05.695104 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:11:05.754458 1030092 logs.go:123] Gathering logs for container status ...
	I1026 15:11:05.754496 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 15:11:08.287645 1030092 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:11:08.288120 1030092 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1026 15:11:08.288229 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:11:08.288297 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:11:08.317543 1030092 cri.go:89] found id: "0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703"
	I1026 15:11:08.317572 1030092 cri.go:89] found id: ""
	I1026 15:11:08.317581 1030092 logs.go:282] 1 containers: [0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703]
	I1026 15:11:08.317644 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:08.321906 1030092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:11:08.321980 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:11:08.349672 1030092 cri.go:89] found id: ""
	I1026 15:11:08.349701 1030092 logs.go:282] 0 containers: []
	W1026 15:11:08.349712 1030092 logs.go:284] No container was found matching "etcd"
	I1026 15:11:08.349720 1030092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:11:08.349780 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:11:08.378612 1030092 cri.go:89] found id: ""
	I1026 15:11:08.378636 1030092 logs.go:282] 0 containers: []
	W1026 15:11:08.378643 1030092 logs.go:284] No container was found matching "coredns"
	I1026 15:11:08.378648 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:11:08.378695 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:11:08.407328 1030092 cri.go:89] found id: "933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:11:08.407354 1030092 cri.go:89] found id: ""
	I1026 15:11:08.407363 1030092 logs.go:282] 1 containers: [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e]
	I1026 15:11:08.407417 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:08.411875 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:11:08.411950 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:11:08.439929 1030092 cri.go:89] found id: ""
	I1026 15:11:08.439958 1030092 logs.go:282] 0 containers: []
	W1026 15:11:08.439968 1030092 logs.go:284] No container was found matching "kube-proxy"
	I1026 15:11:08.439975 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:11:08.440045 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:11:08.469571 1030092 cri.go:89] found id: "51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b"
	I1026 15:11:08.469592 1030092 cri.go:89] found id: "fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:11:08.469595 1030092 cri.go:89] found id: ""
	I1026 15:11:08.469604 1030092 logs.go:282] 2 containers: [51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1]
	I1026 15:11:08.469657 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:08.474409 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:08.478499 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:11:08.478575 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:11:08.506729 1030092 cri.go:89] found id: ""
	I1026 15:11:08.506756 1030092 logs.go:282] 0 containers: []
	W1026 15:11:08.506764 1030092 logs.go:284] No container was found matching "kindnet"
	I1026 15:11:08.506771 1030092 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 15:11:08.506834 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 15:11:08.535880 1030092 cri.go:89] found id: ""
	I1026 15:11:08.535904 1030092 logs.go:282] 0 containers: []
	W1026 15:11:08.535919 1030092 logs.go:284] No container was found matching "storage-provisioner"
	I1026 15:11:08.535934 1030092 logs.go:123] Gathering logs for dmesg ...
	I1026 15:11:08.535946 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:11:08.552721 1030092 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:11:08.552751 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:11:08.612471 1030092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 15:11:08.612495 1030092 logs.go:123] Gathering logs for kube-apiserver [0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703] ...
	I1026 15:11:08.612512 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703"
	I1026 15:11:08.646548 1030092 logs.go:123] Gathering logs for kube-controller-manager [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1] ...
	I1026 15:11:08.646586 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:11:08.675694 1030092 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:11:08.675727 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:11:08.733251 1030092 logs.go:123] Gathering logs for kubelet ...
	I1026 15:11:08.733284 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:11:08.830310 1030092 logs.go:123] Gathering logs for kube-scheduler [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e] ...
	I1026 15:11:08.830352 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:11:08.883994 1030092 logs.go:123] Gathering logs for kube-controller-manager [51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b] ...
	I1026 15:11:08.884033 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b"
	I1026 15:11:08.917368 1030092 logs.go:123] Gathering logs for container status ...
	I1026 15:11:08.917408 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
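
The "container status" command above is a shell fallback chain: `which crictl || echo crictl` picks the crictl binary if present, and the trailing `|| sudo docker ps -a` falls back to Docker if the crictl invocation fails. The same idea expressed directly in Go, as a sketch rather than minikube's implementation (minikube shells out exactly as logged):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus tries each runtime CLI in order and returns the first
// successful listing.
func containerStatus() (string, error) {
	for _, args := range [][]string{
		{"sudo", "crictl", "ps", "-a"},
		{"sudo", "docker", "ps", "-a"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err == nil {
			return string(out), nil
		}
	}
	return "", fmt.Errorf("neither crictl nor docker produced a container list")
}

func main() {
	out, err := containerStatus()
	fmt.Println(len(out), err)
}
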
	I1026 15:11:10.526539 1072816 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-330914" is "Ready"
	I1026 15:11:10.526570 1072816 pod_ready.go:86] duration metric: took 379.257211ms for pod "kube-controller-manager-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:10.726283 1072816 pod_ready.go:83] waiting for pod "kube-proxy-829lp" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:11.126673 1072816 pod_ready.go:94] pod "kube-proxy-829lp" is "Ready"
	I1026 15:11:11.126698 1072816 pod_ready.go:86] duration metric: took 400.390007ms for pod "kube-proxy-829lp" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:11.326427 1072816 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:11.725615 1072816 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-330914" is "Ready"
	I1026 15:11:11.725651 1072816 pod_ready.go:86] duration metric: took 399.197469ms for pod "kube-scheduler-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:11.725668 1072816 pod_ready.go:40] duration metric: took 1.603861334s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:11:11.774782 1072816 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1026 15:11:11.776554 1072816 out.go:203] 
	W1026 15:11:11.777833 1072816 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1026 15:11:11.779133 1072816 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1026 15:11:11.780349 1072816 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-330914" cluster and "default" namespace by default
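
The "minor skew: 6" warning above is simple version arithmetic: kubectl 1.34.1 against cluster 1.28.0 differs by six minor versions, well outside kubectl's supported skew of one minor version in either direction, hence the suggestion to use the bundled "minikube kubectl". A sketch of that comparison with hand-rolled parsing (minikube itself uses a semver library):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference of the minor version fields.
func minorSkew(client, cluster string) int {
	minor := func(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		m, _ := strconv.Atoi(parts[1])
		return m
	}
	d := minor(client) - minor(cluster)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	fmt.Println(minorSkew("1.34.1", "1.28.0")) // 6
}
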
	W1026 15:11:08.607057 1074625 node_ready.go:57] node "no-preload-475081" has "Ready":"False" status (will retry)
	W1026 15:11:11.107346 1074625 node_ready.go:57] node "no-preload-475081" has "Ready":"False" status (will retry)
	I1026 15:11:11.450586 1030092 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:11:11.451209 1030092 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1026 15:11:11.451282 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:11:11.451347 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:11:11.480980 1030092 cri.go:89] found id: "0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703"
	I1026 15:11:11.481008 1030092 cri.go:89] found id: ""
	I1026 15:11:11.481018 1030092 logs.go:282] 1 containers: [0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703]
	I1026 15:11:11.481081 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:11.485354 1030092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:11:11.485429 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:11:11.513983 1030092 cri.go:89] found id: ""
	I1026 15:11:11.514011 1030092 logs.go:282] 0 containers: []
	W1026 15:11:11.514024 1030092 logs.go:284] No container was found matching "etcd"
	I1026 15:11:11.514031 1030092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:11:11.514094 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:11:11.546131 1030092 cri.go:89] found id: ""
	I1026 15:11:11.546157 1030092 logs.go:282] 0 containers: []
	W1026 15:11:11.546182 1030092 logs.go:284] No container was found matching "coredns"
	I1026 15:11:11.546190 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:11:11.546259 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:11:11.575322 1030092 cri.go:89] found id: "933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:11:11.575346 1030092 cri.go:89] found id: ""
	I1026 15:11:11.575356 1030092 logs.go:282] 1 containers: [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e]
	I1026 15:11:11.575425 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:11.579711 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:11:11.579801 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:11:11.609378 1030092 cri.go:89] found id: ""
	I1026 15:11:11.609405 1030092 logs.go:282] 0 containers: []
	W1026 15:11:11.609415 1030092 logs.go:284] No container was found matching "kube-proxy"
	I1026 15:11:11.609423 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:11:11.609485 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:11:11.637703 1030092 cri.go:89] found id: "51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b"
	I1026 15:11:11.637729 1030092 cri.go:89] found id: "fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:11:11.637734 1030092 cri.go:89] found id: ""
	I1026 15:11:11.637745 1030092 logs.go:282] 2 containers: [51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1]
	I1026 15:11:11.637818 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:11.642074 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:11.646190 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:11:11.646262 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:11:11.676915 1030092 cri.go:89] found id: ""
	I1026 15:11:11.676943 1030092 logs.go:282] 0 containers: []
	W1026 15:11:11.676953 1030092 logs.go:284] No container was found matching "kindnet"
	I1026 15:11:11.676959 1030092 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 15:11:11.677007 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 15:11:11.707840 1030092 cri.go:89] found id: ""
	I1026 15:11:11.707869 1030092 logs.go:282] 0 containers: []
	W1026 15:11:11.707878 1030092 logs.go:284] No container was found matching "storage-provisioner"
	I1026 15:11:11.707893 1030092 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:11:11.707904 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:11:11.767156 1030092 logs.go:123] Gathering logs for container status ...
	I1026 15:11:11.767201 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 15:11:11.805157 1030092 logs.go:123] Gathering logs for kube-controller-manager [51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b] ...
	I1026 15:11:11.805204 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b"
	I1026 15:11:11.834187 1030092 logs.go:123] Gathering logs for kube-controller-manager [fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1] ...
	I1026 15:11:11.834227 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fe017e3a6b84bb07a11cb153b3c483f6beebb9f00e06807b2485eaea07e756b1"
	I1026 15:11:11.865256 1030092 logs.go:123] Gathering logs for kubelet ...
	I1026 15:11:11.865286 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:11:11.973050 1030092 logs.go:123] Gathering logs for dmesg ...
	I1026 15:11:11.973090 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:11:11.990773 1030092 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:11:11.990817 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:11:12.050222 1030092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 15:11:12.050252 1030092 logs.go:123] Gathering logs for kube-apiserver [0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703] ...
	I1026 15:11:12.050271 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703"
	I1026 15:11:12.085010 1030092 logs.go:123] Gathering logs for kube-scheduler [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e] ...
	I1026 15:11:12.085054 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:11:14.644225 1030092 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:11:14.644669 1030092 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1026 15:11:14.644729 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:11:14.644794 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:11:14.676017 1030092 cri.go:89] found id: "0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703"
	I1026 15:11:14.676042 1030092 cri.go:89] found id: ""
	I1026 15:11:14.676053 1030092 logs.go:282] 1 containers: [0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703]
	I1026 15:11:14.676114 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:14.680608 1030092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:11:14.680688 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:11:14.711808 1030092 cri.go:89] found id: ""
	I1026 15:11:14.711845 1030092 logs.go:282] 0 containers: []
	W1026 15:11:14.711856 1030092 logs.go:284] No container was found matching "etcd"
	I1026 15:11:14.711863 1030092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:11:14.711931 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:11:14.741627 1030092 cri.go:89] found id: ""
	I1026 15:11:14.741657 1030092 logs.go:282] 0 containers: []
	W1026 15:11:14.741667 1030092 logs.go:284] No container was found matching "coredns"
	I1026 15:11:14.741675 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:11:14.741724 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:11:14.769937 1030092 cri.go:89] found id: "933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:11:14.769964 1030092 cri.go:89] found id: ""
	I1026 15:11:14.769976 1030092 logs.go:282] 1 containers: [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e]
	I1026 15:11:14.770028 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:14.774432 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:11:14.774500 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:11:14.803136 1030092 cri.go:89] found id: ""
	I1026 15:11:14.803206 1030092 logs.go:282] 0 containers: []
	W1026 15:11:14.803221 1030092 logs.go:284] No container was found matching "kube-proxy"
	I1026 15:11:14.803234 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:11:14.803297 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:11:14.832287 1030092 cri.go:89] found id: "51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b"
	I1026 15:11:14.832314 1030092 cri.go:89] found id: ""
	I1026 15:11:14.832325 1030092 logs.go:282] 1 containers: [51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b]
	I1026 15:11:14.832386 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:14.836724 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:11:14.836797 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:11:14.865490 1030092 cri.go:89] found id: ""
	I1026 15:11:14.865535 1030092 logs.go:282] 0 containers: []
	W1026 15:11:14.865547 1030092 logs.go:284] No container was found matching "kindnet"
	I1026 15:11:14.865555 1030092 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 15:11:14.865623 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 15:11:14.894500 1030092 cri.go:89] found id: ""
	I1026 15:11:14.894526 1030092 logs.go:282] 0 containers: []
	W1026 15:11:14.894534 1030092 logs.go:284] No container was found matching "storage-provisioner"
	I1026 15:11:14.894544 1030092 logs.go:123] Gathering logs for kube-scheduler [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e] ...
	I1026 15:11:14.894557 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:11:14.950065 1030092 logs.go:123] Gathering logs for kube-controller-manager [51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b] ...
	I1026 15:11:14.950107 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b"
	I1026 15:11:14.980621 1030092 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:11:14.980655 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:11:15.039070 1030092 logs.go:123] Gathering logs for container status ...
	I1026 15:11:15.039110 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 15:11:15.075882 1030092 logs.go:123] Gathering logs for kubelet ...
	I1026 15:11:15.075929 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:11:15.171642 1030092 logs.go:123] Gathering logs for dmesg ...
	I1026 15:11:15.171678 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:11:15.188099 1030092 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:11:15.188131 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:11:15.246139 1030092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 15:11:15.246178 1030092 logs.go:123] Gathering logs for kube-apiserver [0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703] ...
	I1026 15:11:15.246195 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703"
	W1026 15:11:13.607101 1074625 node_ready.go:57] node "no-preload-475081" has "Ready":"False" status (will retry)
	I1026 15:11:14.607464 1074625 node_ready.go:49] node "no-preload-475081" is "Ready"
	I1026 15:11:14.607495 1074625 node_ready.go:38] duration metric: took 12.503660845s for node "no-preload-475081" to be "Ready" ...
	I1026 15:11:14.607512 1074625 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:11:14.607596 1074625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:11:14.625506 1074625 api_server.go:72] duration metric: took 12.848228717s to wait for apiserver process to appear ...
	I1026 15:11:14.625538 1074625 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:11:14.625561 1074625 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1026 15:11:14.630694 1074625 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1026 15:11:14.631772 1074625 api_server.go:141] control plane version: v1.34.1
	I1026 15:11:14.631802 1074625 api_server.go:131] duration metric: took 6.25545ms to wait for apiserver health ...
	I1026 15:11:14.631814 1074625 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:11:14.637467 1074625 system_pods.go:59] 8 kube-system pods found
	I1026 15:11:14.637510 1074625 system_pods.go:61] "coredns-66bc5c9577-knr22" [4ba1a7ff-bfea-43bf-b65e-2b1309709ac4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:11:14.637525 1074625 system_pods.go:61] "etcd-no-preload-475081" [517b1527-7a3f-4127-8911-3d16611e2468] Running
	I1026 15:11:14.637533 1074625 system_pods.go:61] "kindnet-7cnvx" [e36c0ce9-7f97-4d93-a199-92e1d130eb0b] Running
	I1026 15:11:14.637539 1074625 system_pods.go:61] "kube-apiserver-no-preload-475081" [ee5497a8-ed40-496e-b36e-370bb14c3fad] Running
	I1026 15:11:14.637545 1074625 system_pods.go:61] "kube-controller-manager-no-preload-475081" [9bae4029-8dac-4406-83b1-6318c3ea749c] Running
	I1026 15:11:14.637550 1074625 system_pods.go:61] "kube-proxy-smtlg" [5b84f479-f0f8-4260-bd71-ce14b36bae0d] Running
	I1026 15:11:14.637568 1074625 system_pods.go:61] "kube-scheduler-no-preload-475081" [db077d81-a03f-4886-b004-749606cfcdca] Running
	I1026 15:11:14.637575 1074625 system_pods.go:61] "storage-provisioner" [15518fa4-cf2c-44fe-8b16-e222dcbae51f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:11:14.637584 1074625 system_pods.go:74] duration metric: took 5.762278ms to wait for pod list to return data ...
	I1026 15:11:14.637596 1074625 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:11:14.641112 1074625 default_sa.go:45] found service account: "default"
	I1026 15:11:14.641144 1074625 default_sa.go:55] duration metric: took 3.540551ms for default service account to be created ...
	I1026 15:11:14.641155 1074625 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:11:14.736917 1074625 system_pods.go:86] 8 kube-system pods found
	I1026 15:11:14.736956 1074625 system_pods.go:89] "coredns-66bc5c9577-knr22" [4ba1a7ff-bfea-43bf-b65e-2b1309709ac4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:11:14.736965 1074625 system_pods.go:89] "etcd-no-preload-475081" [517b1527-7a3f-4127-8911-3d16611e2468] Running
	I1026 15:11:14.736974 1074625 system_pods.go:89] "kindnet-7cnvx" [e36c0ce9-7f97-4d93-a199-92e1d130eb0b] Running
	I1026 15:11:14.736980 1074625 system_pods.go:89] "kube-apiserver-no-preload-475081" [ee5497a8-ed40-496e-b36e-370bb14c3fad] Running
	I1026 15:11:14.736986 1074625 system_pods.go:89] "kube-controller-manager-no-preload-475081" [9bae4029-8dac-4406-83b1-6318c3ea749c] Running
	I1026 15:11:14.737004 1074625 system_pods.go:89] "kube-proxy-smtlg" [5b84f479-f0f8-4260-bd71-ce14b36bae0d] Running
	I1026 15:11:14.737013 1074625 system_pods.go:89] "kube-scheduler-no-preload-475081" [db077d81-a03f-4886-b004-749606cfcdca] Running
	I1026 15:11:14.737020 1074625 system_pods.go:89] "storage-provisioner" [15518fa4-cf2c-44fe-8b16-e222dcbae51f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:11:14.737056 1074625 retry.go:31] will retry after 219.983144ms: missing components: kube-dns
	I1026 15:11:14.961636 1074625 system_pods.go:86] 8 kube-system pods found
	I1026 15:11:14.961668 1074625 system_pods.go:89] "coredns-66bc5c9577-knr22" [4ba1a7ff-bfea-43bf-b65e-2b1309709ac4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:11:14.961675 1074625 system_pods.go:89] "etcd-no-preload-475081" [517b1527-7a3f-4127-8911-3d16611e2468] Running
	I1026 15:11:14.961683 1074625 system_pods.go:89] "kindnet-7cnvx" [e36c0ce9-7f97-4d93-a199-92e1d130eb0b] Running
	I1026 15:11:14.961696 1074625 system_pods.go:89] "kube-apiserver-no-preload-475081" [ee5497a8-ed40-496e-b36e-370bb14c3fad] Running
	I1026 15:11:14.961700 1074625 system_pods.go:89] "kube-controller-manager-no-preload-475081" [9bae4029-8dac-4406-83b1-6318c3ea749c] Running
	I1026 15:11:14.961703 1074625 system_pods.go:89] "kube-proxy-smtlg" [5b84f479-f0f8-4260-bd71-ce14b36bae0d] Running
	I1026 15:11:14.961706 1074625 system_pods.go:89] "kube-scheduler-no-preload-475081" [db077d81-a03f-4886-b004-749606cfcdca] Running
	I1026 15:11:14.961710 1074625 system_pods.go:89] "storage-provisioner" [15518fa4-cf2c-44fe-8b16-e222dcbae51f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:11:14.961727 1074625 retry.go:31] will retry after 370.983761ms: missing components: kube-dns
	I1026 15:11:15.337320 1074625 system_pods.go:86] 8 kube-system pods found
	I1026 15:11:15.337350 1074625 system_pods.go:89] "coredns-66bc5c9577-knr22" [4ba1a7ff-bfea-43bf-b65e-2b1309709ac4] Running
	I1026 15:11:15.337356 1074625 system_pods.go:89] "etcd-no-preload-475081" [517b1527-7a3f-4127-8911-3d16611e2468] Running
	I1026 15:11:15.337363 1074625 system_pods.go:89] "kindnet-7cnvx" [e36c0ce9-7f97-4d93-a199-92e1d130eb0b] Running
	I1026 15:11:15.337367 1074625 system_pods.go:89] "kube-apiserver-no-preload-475081" [ee5497a8-ed40-496e-b36e-370bb14c3fad] Running
	I1026 15:11:15.337371 1074625 system_pods.go:89] "kube-controller-manager-no-preload-475081" [9bae4029-8dac-4406-83b1-6318c3ea749c] Running
	I1026 15:11:15.337374 1074625 system_pods.go:89] "kube-proxy-smtlg" [5b84f479-f0f8-4260-bd71-ce14b36bae0d] Running
	I1026 15:11:15.337377 1074625 system_pods.go:89] "kube-scheduler-no-preload-475081" [db077d81-a03f-4886-b004-749606cfcdca] Running
	I1026 15:11:15.337380 1074625 system_pods.go:89] "storage-provisioner" [15518fa4-cf2c-44fe-8b16-e222dcbae51f] Running
	I1026 15:11:15.337388 1074625 system_pods.go:126] duration metric: took 696.20329ms to wait for k8s-apps to be running ...
	I1026 15:11:15.337395 1074625 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:11:15.337453 1074625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:11:15.351058 1074625 system_svc.go:56] duration metric: took 13.652446ms WaitForService to wait for kubelet
	I1026 15:11:15.351086 1074625 kubeadm.go:586] duration metric: took 13.573820317s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:11:15.351104 1074625 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:11:15.353841 1074625 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 15:11:15.353865 1074625 node_conditions.go:123] node cpu capacity is 8
	I1026 15:11:15.353889 1074625 node_conditions.go:105] duration metric: took 2.780465ms to run NodePressure ...
	I1026 15:11:15.353901 1074625 start.go:241] waiting for startup goroutines ...
	I1026 15:11:15.353910 1074625 start.go:246] waiting for cluster config update ...
	I1026 15:11:15.353922 1074625 start.go:255] writing updated cluster config ...
	I1026 15:11:15.354188 1074625 ssh_runner.go:195] Run: rm -f paused
	I1026 15:11:15.358267 1074625 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:11:15.361450 1074625 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-knr22" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:15.365648 1074625 pod_ready.go:94] pod "coredns-66bc5c9577-knr22" is "Ready"
	I1026 15:11:15.365671 1074625 pod_ready.go:86] duration metric: took 4.19882ms for pod "coredns-66bc5c9577-knr22" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:15.367814 1074625 pod_ready.go:83] waiting for pod "etcd-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:15.371397 1074625 pod_ready.go:94] pod "etcd-no-preload-475081" is "Ready"
	I1026 15:11:15.371416 1074625 pod_ready.go:86] duration metric: took 3.581783ms for pod "etcd-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:15.373200 1074625 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:15.376591 1074625 pod_ready.go:94] pod "kube-apiserver-no-preload-475081" is "Ready"
	I1026 15:11:15.376613 1074625 pod_ready.go:86] duration metric: took 3.391538ms for pod "kube-apiserver-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:15.378391 1074625 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:15.762605 1074625 pod_ready.go:94] pod "kube-controller-manager-no-preload-475081" is "Ready"
	I1026 15:11:15.762631 1074625 pod_ready.go:86] duration metric: took 384.221212ms for pod "kube-controller-manager-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:15.962635 1074625 pod_ready.go:83] waiting for pod "kube-proxy-smtlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:16.362127 1074625 pod_ready.go:94] pod "kube-proxy-smtlg" is "Ready"
	I1026 15:11:16.362153 1074625 pod_ready.go:86] duration metric: took 399.494041ms for pod "kube-proxy-smtlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:16.562828 1074625 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:16.962134 1074625 pod_ready.go:94] pod "kube-scheduler-no-preload-475081" is "Ready"
	I1026 15:11:16.962197 1074625 pod_ready.go:86] duration metric: took 399.305825ms for pod "kube-scheduler-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:11:16.962218 1074625 pod_ready.go:40] duration metric: took 1.603926195s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:11:17.009485 1074625 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:11:17.011155 1074625 out.go:179] * Done! kubectl is now configured to use "no-preload-475081" cluster and "default" namespace by default
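
The pod_ready.go lines above check, per pod, whether the Ready condition in the pod's status is True. minikube does this through client-go; the same check can be approximated with kubectl's standard jsonpath output, sketched here with the pod name and namespace taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(ns, name string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", ns, "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return string(out) == "True", nil
}

func main() {
	fmt.Println(podReady("kube-system", "coredns-66bc5c9577-knr22"))
}
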
	I1026 15:11:17.782637 1030092 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:11:17.783126 1030092 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1026 15:11:17.783227 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:11:17.783287 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:11:17.813444 1030092 cri.go:89] found id: "0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703"
	I1026 15:11:17.813471 1030092 cri.go:89] found id: ""
	I1026 15:11:17.813482 1030092 logs.go:282] 1 containers: [0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703]
	I1026 15:11:17.813536 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:17.817877 1030092 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:11:17.817943 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:11:17.846309 1030092 cri.go:89] found id: ""
	I1026 15:11:17.846338 1030092 logs.go:282] 0 containers: []
	W1026 15:11:17.846350 1030092 logs.go:284] No container was found matching "etcd"
	I1026 15:11:17.846358 1030092 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:11:17.846423 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:11:17.875481 1030092 cri.go:89] found id: ""
	I1026 15:11:17.875507 1030092 logs.go:282] 0 containers: []
	W1026 15:11:17.875514 1030092 logs.go:284] No container was found matching "coredns"
	I1026 15:11:17.875520 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:11:17.875577 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:11:17.904440 1030092 cri.go:89] found id: "933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:11:17.904467 1030092 cri.go:89] found id: ""
	I1026 15:11:17.904478 1030092 logs.go:282] 1 containers: [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e]
	I1026 15:11:17.904531 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:17.908676 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:11:17.908750 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:11:17.936303 1030092 cri.go:89] found id: ""
	I1026 15:11:17.936332 1030092 logs.go:282] 0 containers: []
	W1026 15:11:17.936340 1030092 logs.go:284] No container was found matching "kube-proxy"
	I1026 15:11:17.936347 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:11:17.936407 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:11:17.963974 1030092 cri.go:89] found id: "51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b"
	I1026 15:11:17.964004 1030092 cri.go:89] found id: ""
	I1026 15:11:17.964016 1030092 logs.go:282] 1 containers: [51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b]
	I1026 15:11:17.964084 1030092 ssh_runner.go:195] Run: which crictl
	I1026 15:11:17.968123 1030092 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:11:17.968193 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:11:17.996822 1030092 cri.go:89] found id: ""
	I1026 15:11:17.996846 1030092 logs.go:282] 0 containers: []
	W1026 15:11:17.996855 1030092 logs.go:284] No container was found matching "kindnet"
	I1026 15:11:17.996861 1030092 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 15:11:17.996916 1030092 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 15:11:18.027216 1030092 cri.go:89] found id: ""
	I1026 15:11:18.027246 1030092 logs.go:282] 0 containers: []
	W1026 15:11:18.027257 1030092 logs.go:284] No container was found matching "storage-provisioner"
	I1026 15:11:18.027270 1030092 logs.go:123] Gathering logs for dmesg ...
	I1026 15:11:18.027292 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:11:18.049719 1030092 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:11:18.049761 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:11:18.117642 1030092 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 15:11:18.117668 1030092 logs.go:123] Gathering logs for kube-apiserver [0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703] ...
	I1026 15:11:18.117686 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0eaf958f423190720c31fb3e79ace3a05563a001e4e0400987bd4ed0ef783703"
	I1026 15:11:18.156354 1030092 logs.go:123] Gathering logs for kube-scheduler [933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e] ...
	I1026 15:11:18.156388 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 933b76c9878d16c7b4da74cd5665e9c51b4d7f32726307ce6dd416bfdf677c8e"
	I1026 15:11:18.211792 1030092 logs.go:123] Gathering logs for kube-controller-manager [51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b] ...
	I1026 15:11:18.211836 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 51a6c66744b1eda9f5b8bddc6e44794d656aa3f623fa6b2df996290205b0428b"
	I1026 15:11:18.243265 1030092 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:11:18.243300 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:11:18.298329 1030092 logs.go:123] Gathering logs for container status ...
	I1026 15:11:18.298385 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 15:11:18.330819 1030092 logs.go:123] Gathering logs for kubelet ...
	I1026 15:11:18.330849 1030092 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	
	
	==> CRI-O <==
	Oct 26 15:11:14 no-preload-475081 crio[772]: time="2025-10-26T15:11:14.607562961Z" level=info msg="Starting container: 2d889ad24858b8b809e20434fd5b983d1ac7eba24616a1deaff342a91c879c84" id=2277440d-0a43-40ba-97ee-3ae35cff0be5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:11:14 no-preload-475081 crio[772]: time="2025-10-26T15:11:14.610294157Z" level=info msg="Started container" PID=2892 containerID=2d889ad24858b8b809e20434fd5b983d1ac7eba24616a1deaff342a91c879c84 description=kube-system/coredns-66bc5c9577-knr22/coredns id=2277440d-0a43-40ba-97ee-3ae35cff0be5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ec29659d7264d0a952a32dc17fbfca3954c294eb7af2912065683b078b2d5c66
	Oct 26 15:11:17 no-preload-475081 crio[772]: time="2025-10-26T15:11:17.46300756Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d97b7939-0be0-4a5c-84b9-ff4a06b53257 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:11:17 no-preload-475081 crio[772]: time="2025-10-26T15:11:17.463200944Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:11:17 no-preload-475081 crio[772]: time="2025-10-26T15:11:17.469620022Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f639e796e37ba1144942c1a7a7ad6ea6116d1eef2a26c7cab4f836af31b2ca39 UID:fa6c47a1-6c0a-41c3-a288-0ec79f76a4ba NetNS:/var/run/netns/6aa0299a-9a82-48e7-af18-bc523a8d706b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008adf0}] Aliases:map[]}"
	Oct 26 15:11:17 no-preload-475081 crio[772]: time="2025-10-26T15:11:17.469659844Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 26 15:11:17 no-preload-475081 crio[772]: time="2025-10-26T15:11:17.480300882Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f639e796e37ba1144942c1a7a7ad6ea6116d1eef2a26c7cab4f836af31b2ca39 UID:fa6c47a1-6c0a-41c3-a288-0ec79f76a4ba NetNS:/var/run/netns/6aa0299a-9a82-48e7-af18-bc523a8d706b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008adf0}] Aliases:map[]}"
	Oct 26 15:11:17 no-preload-475081 crio[772]: time="2025-10-26T15:11:17.480479614Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 26 15:11:17 no-preload-475081 crio[772]: time="2025-10-26T15:11:17.481354887Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 15:11:17 no-preload-475081 crio[772]: time="2025-10-26T15:11:17.482406849Z" level=info msg="Ran pod sandbox f639e796e37ba1144942c1a7a7ad6ea6116d1eef2a26c7cab4f836af31b2ca39 with infra container: default/busybox/POD" id=d97b7939-0be0-4a5c-84b9-ff4a06b53257 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:11:17 no-preload-475081 crio[772]: time="2025-10-26T15:11:17.483676969Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ef46f023-8b5f-43d1-9b69-cce60ec1df57 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:11:17 no-preload-475081 crio[772]: time="2025-10-26T15:11:17.483780028Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ef46f023-8b5f-43d1-9b69-cce60ec1df57 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:11:17 no-preload-475081 crio[772]: time="2025-10-26T15:11:17.48380799Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ef46f023-8b5f-43d1-9b69-cce60ec1df57 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:11:17 no-preload-475081 crio[772]: time="2025-10-26T15:11:17.484399705Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4e494eef-36c1-45b6-ad84-5e0f53dcc4e7 name=/runtime.v1.ImageService/PullImage
	Oct 26 15:11:17 no-preload-475081 crio[772]: time="2025-10-26T15:11:17.485822822Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 26 15:11:18 no-preload-475081 crio[772]: time="2025-10-26T15:11:18.18209079Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=4e494eef-36c1-45b6-ad84-5e0f53dcc4e7 name=/runtime.v1.ImageService/PullImage
	Oct 26 15:11:18 no-preload-475081 crio[772]: time="2025-10-26T15:11:18.182737964Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=54144655-ed14-4d06-b396-9dfd5bab2cec name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:11:18 no-preload-475081 crio[772]: time="2025-10-26T15:11:18.184483983Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f73278f0-b7ba-4368-97f8-bcfc1e59dd30 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:11:18 no-preload-475081 crio[772]: time="2025-10-26T15:11:18.188040705Z" level=info msg="Creating container: default/busybox/busybox" id=9290b078-f685-4d05-9875-dd7eba171642 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:11:18 no-preload-475081 crio[772]: time="2025-10-26T15:11:18.188215269Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:11:18 no-preload-475081 crio[772]: time="2025-10-26T15:11:18.191859802Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:11:18 no-preload-475081 crio[772]: time="2025-10-26T15:11:18.192398821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:11:18 no-preload-475081 crio[772]: time="2025-10-26T15:11:18.222025358Z" level=info msg="Created container 821a97cfde9196095279a8dbe55965f6d3f838d87cf0cc07fc26721d3aaea417: default/busybox/busybox" id=9290b078-f685-4d05-9875-dd7eba171642 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:11:18 no-preload-475081 crio[772]: time="2025-10-26T15:11:18.222773074Z" level=info msg="Starting container: 821a97cfde9196095279a8dbe55965f6d3f838d87cf0cc07fc26721d3aaea417" id=2baa9982-60e1-4ca4-ae26-f2c7b3967162 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:11:18 no-preload-475081 crio[772]: time="2025-10-26T15:11:18.225007041Z" level=info msg="Started container" PID=2965 containerID=821a97cfde9196095279a8dbe55965f6d3f838d87cf0cc07fc26721d3aaea417 description=default/busybox/busybox id=2baa9982-60e1-4ca4-ae26-f2c7b3967162 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f639e796e37ba1144942c1a7a7ad6ea6116d1eef2a26c7cab4f836af31b2ca39
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	821a97cfde919       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   f639e796e37ba       busybox                                     default
	2d889ad24858b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   ec29659d7264d       coredns-66bc5c9577-knr22                    kube-system
	4116f82c92ee5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   799fd8cd4015d       storage-provisioner                         kube-system
	3edb601cc305a       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   cf5da8743c8e0       kindnet-7cnvx                               kube-system
	5cf340bd40fc8       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   ff4a84c89811f       kube-proxy-smtlg                            kube-system
	0252ed738a21a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   df148a2e9b020       kube-scheduler-no-preload-475081            kube-system
	770b98cfe133e       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   10537c077971e       kube-apiserver-no-preload-475081            kube-system
	c113a18e06f3b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   dca7e5d2c7537       kube-controller-manager-no-preload-475081   kube-system
	b0c7ef93f6f0d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   ddac5973dcbfc       etcd-no-preload-475081                      kube-system
	
	
	==> coredns [2d889ad24858b8b809e20434fd5b983d1ac7eba24616a1deaff342a91c879c84] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39073 - 64113 "HINFO IN 3157165701712862950.227862131519849646. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.103261496s
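The HINFO lookup above is CoreDNS's loop-detection self-probe, so its NXDOMAIN answer is expected rather than an error. To exercise cluster DNS end to end against the kube-dns service IP allocated in the apiserver log below (10.96.0.10), one hedged option is a throwaway pod:

    kubectl run -it --rm dnscheck --image=busybox:1.28 --restart=Never -- \
      nslookup kubernetes.default.svc.cluster.local 10.96.0.10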
	
	
	==> describe nodes <==
	Name:               no-preload-475081
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-475081
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=no-preload-475081
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_10_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:10:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-475081
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:11:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:11:16 +0000   Sun, 26 Oct 2025 15:10:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:11:16 +0000   Sun, 26 Oct 2025 15:10:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:11:16 +0000   Sun, 26 Oct 2025 15:10:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:11:16 +0000   Sun, 26 Oct 2025 15:11:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-475081
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                27d383f0-839c-47db-b23d-2fb7490add92
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-66bc5c9577-knr22                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-no-preload-475081                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-7cnvx                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-no-preload-475081             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-no-preload-475081    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-smtlg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-no-preload-475081             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node no-preload-475081 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node no-preload-475081 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x8 over 34s)  kubelet          Node no-preload-475081 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s                kubelet          Node no-preload-475081 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s                kubelet          Node no-preload-475081 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s                kubelet          Node no-preload-475081 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node no-preload-475081 event: Registered Node no-preload-475081 in Controller
	  Normal  NodeReady                11s                kubelet          Node no-preload-475081 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
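The "martian source" lines are the kernel flagging packets whose source address should not appear on the receiving interface; with the 10.244.0.0/24 pod CIDR crossing a Docker bridge, they are routine noise in nested setups like this one. Whether they are logged is controlled by a sysctl; a host-level sketch (assumes root on the outer host):

    sysctl net.ipv4.conf.all.log_martians             # 1 = log martians (current behaviour)
    sudo sysctl -w net.ipv4.conf.all.log_martians=0   # silence them; 1 re-enables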
	
	
	==> etcd [b0c7ef93f6f0dc0949ae351ff8affa4df3893310d5ebb1075aebf9301e5a889e] <==
	{"level":"warn","ts":"2025-10-26T15:10:52.903841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:52.916490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:52.926664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:52.942508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:52.952586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:52.959528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:52.966993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:52.974094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:52.981391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:52.990441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:52.999308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:53.005972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:53.012988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:53.019017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:53.026095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:53.032694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:53.040211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:53.048528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:53.055376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:53.062653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:53.069533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:53.076259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:53.091238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:53.098387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:10:53.106819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57562","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:11:25 up  2:53,  0 user,  load average: 2.75, 2.48, 1.63
	Linux no-preload-475081 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3edb601cc305a33788cf4bce6289f6445cd26ee883289528c2d5dff60d4fc8c6] <==
	I1026 15:11:03.681109       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:11:03.681419       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1026 15:11:03.681578       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:11:03.681596       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:11:03.681622       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:11:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:11:03.886936       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:11:03.887026       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:11:03.887039       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:11:03.977829       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 15:11:04.187490       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:11:04.187514       1 metrics.go:72] Registering metrics
	I1026 15:11:04.187554       1 controller.go:711] "Syncing nftables rules"
	I1026 15:11:13.891248       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 15:11:13.891311       1 main.go:301] handling current node
	I1026 15:11:23.887081       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 15:11:23.887146       1 main.go:301] handling current node
	
	
	==> kube-apiserver [770b98cfe133e8006574c611566657d462a11b08eee6815bed2b400ef4e76f19] <==
	I1026 15:10:53.666651       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:10:53.667942       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 15:10:53.671977       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1026 15:10:53.672129       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:10:53.677946       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:10:53.678566       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:10:53.863303       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:10:54.590242       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 15:10:54.629306       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 15:10:54.629327       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:10:55.136445       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:10:55.180303       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:10:55.276279       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 15:10:55.282590       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1026 15:10:55.283754       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:10:55.288084       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:10:55.607448       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:10:56.146108       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:10:56.155404       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 15:10:56.163410       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 15:11:01.361309       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:11:01.365560       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:11:01.460052       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1026 15:11:01.660546       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1026 15:11:24.260074       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:37648: use of closed network connection
	
	
	==> kube-controller-manager [c113a18e06f3bdcafa8ed5ef38660a6d38d8579bf7122b6a9826d337bf66df98] <==
	I1026 15:11:00.605603       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:11:00.605628       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:11:00.605636       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 15:11:00.607020       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 15:11:00.607046       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 15:11:00.607104       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 15:11:00.607141       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 15:11:00.607195       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 15:11:00.607323       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 15:11:00.607403       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 15:11:00.607725       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 15:11:00.608379       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 15:11:00.608400       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 15:11:00.608523       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 15:11:00.609418       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 15:11:00.610818       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1026 15:11:00.612096       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 15:11:00.612135       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 15:11:00.612210       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 15:11:00.612260       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:11:00.614372       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 15:11:00.615546       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:11:00.619718       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 15:11:00.626049       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:11:15.557973       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5cf340bd40fc881a165c35161491bdd6e6f3ebc0f613e02e47123f7b001073fb] <==
	I1026 15:11:02.066239       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:11:02.128524       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:11:02.229468       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:11:02.229509       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1026 15:11:02.229617       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:11:02.248079       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:11:02.248149       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:11:02.253962       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:11:02.254443       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:11:02.254488       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:11:02.258905       1 config.go:200] "Starting service config controller"
	I1026 15:11:02.258991       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:11:02.259347       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:11:02.259420       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:11:02.259353       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:11:02.259483       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:11:02.259852       1 config.go:309] "Starting node config controller"
	I1026 15:11:02.259878       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:11:02.259886       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:11:02.359549       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 15:11:02.359645       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:11:02.359643       1 shared_informer.go:356] "Caches are synced" controller="service config"
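The only error here is kube-proxy noting that nodePortAddresses is unset, so NodePort services accept connections on every local IP; the message itself names the mitigation. A hedged sketch of applying it through the kubeadm-managed kube-proxy ConfigMap rather than a raw flag (the ConfigMap name is the kubeadm default):

    # set nodePortAddresses: ["primary"] in the KubeProxyConfiguration section
    kubectl -n kube-system edit configmap kube-proxy
    kubectl -n kube-system rollout restart daemonset kube-proxy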
	
	
	==> kube-scheduler [0252ed738a21a1a443fc2ea83d07a6b83b3d600d374c4b203b164280af12a027] <==
	E1026 15:10:53.884817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 15:10:53.885091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 15:10:53.885257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 15:10:53.885551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 15:10:53.885543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 15:10:53.885582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 15:10:53.885628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 15:10:53.885764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 15:10:53.885848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 15:10:53.885761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 15:10:53.885894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 15:10:53.886006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 15:10:53.885470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 15:10:53.886067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 15:10:53.886105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 15:10:53.886124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 15:10:54.738349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 15:10:54.744533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 15:10:54.817131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 15:10:54.826235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 15:10:54.833519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 15:10:54.903847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 15:10:54.931135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 15:10:54.947458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1026 15:10:57.379825       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
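The burst of "Failed to watch ... forbidden" errors is the scheduler listing resources before the apiserver finished bootstrapping its default RBAC roles; the closing "Caches are synced" line shows it recovered once the grants existed. A hedged spot-check that the permissions are in place afterwards:

    kubectl auth can-i list nodes --as=system:kube-scheduler                        # expect "yes"
    kubectl auth can-i watch poddisruptionbudgets.policy --as=system:kube-scheduler # likewise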
	
	
	==> kubelet <==
	Oct 26 15:10:57 no-preload-475081 kubelet[2285]: I1026 15:10:57.073728    2285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-475081" podStartSLOduration=1.073705064 podStartE2EDuration="1.073705064s" podCreationTimestamp="2025-10-26 15:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:10:57.062327758 +0000 UTC m=+1.161023226" watchObservedRunningTime="2025-10-26 15:10:57.073705064 +0000 UTC m=+1.172400538"
	Oct 26 15:10:57 no-preload-475081 kubelet[2285]: I1026 15:10:57.087530    2285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-475081" podStartSLOduration=1.087511024 podStartE2EDuration="1.087511024s" podCreationTimestamp="2025-10-26 15:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:10:57.073916701 +0000 UTC m=+1.172612166" watchObservedRunningTime="2025-10-26 15:10:57.087511024 +0000 UTC m=+1.186206516"
	Oct 26 15:10:57 no-preload-475081 kubelet[2285]: I1026 15:10:57.087673    2285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-475081" podStartSLOduration=1.087666057 podStartE2EDuration="1.087666057s" podCreationTimestamp="2025-10-26 15:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:10:57.087310636 +0000 UTC m=+1.186006109" watchObservedRunningTime="2025-10-26 15:10:57.087666057 +0000 UTC m=+1.186361530"
	Oct 26 15:10:57 no-preload-475081 kubelet[2285]: I1026 15:10:57.117968    2285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-475081" podStartSLOduration=1.117949614 podStartE2EDuration="1.117949614s" podCreationTimestamp="2025-10-26 15:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:10:57.101352604 +0000 UTC m=+1.200048078" watchObservedRunningTime="2025-10-26 15:10:57.117949614 +0000 UTC m=+1.216645087"
	Oct 26 15:11:00 no-preload-475081 kubelet[2285]: I1026 15:11:00.631915    2285 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 26 15:11:00 no-preload-475081 kubelet[2285]: I1026 15:11:00.632756    2285 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 26 15:11:01 no-preload-475081 kubelet[2285]: I1026 15:11:01.517614    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e36c0ce9-7f97-4d93-a199-92e1d130eb0b-xtables-lock\") pod \"kindnet-7cnvx\" (UID: \"e36c0ce9-7f97-4d93-a199-92e1d130eb0b\") " pod="kube-system/kindnet-7cnvx"
	Oct 26 15:11:01 no-preload-475081 kubelet[2285]: I1026 15:11:01.517667    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhrsb\" (UniqueName: \"kubernetes.io/projected/e36c0ce9-7f97-4d93-a199-92e1d130eb0b-kube-api-access-dhrsb\") pod \"kindnet-7cnvx\" (UID: \"e36c0ce9-7f97-4d93-a199-92e1d130eb0b\") " pod="kube-system/kindnet-7cnvx"
	Oct 26 15:11:01 no-preload-475081 kubelet[2285]: I1026 15:11:01.517697    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b84f479-f0f8-4260-bd71-ce14b36bae0d-xtables-lock\") pod \"kube-proxy-smtlg\" (UID: \"5b84f479-f0f8-4260-bd71-ce14b36bae0d\") " pod="kube-system/kube-proxy-smtlg"
	Oct 26 15:11:01 no-preload-475081 kubelet[2285]: I1026 15:11:01.517766    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b84f479-f0f8-4260-bd71-ce14b36bae0d-lib-modules\") pod \"kube-proxy-smtlg\" (UID: \"5b84f479-f0f8-4260-bd71-ce14b36bae0d\") " pod="kube-system/kube-proxy-smtlg"
	Oct 26 15:11:01 no-preload-475081 kubelet[2285]: I1026 15:11:01.517821    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e36c0ce9-7f97-4d93-a199-92e1d130eb0b-cni-cfg\") pod \"kindnet-7cnvx\" (UID: \"e36c0ce9-7f97-4d93-a199-92e1d130eb0b\") " pod="kube-system/kindnet-7cnvx"
	Oct 26 15:11:01 no-preload-475081 kubelet[2285]: I1026 15:11:01.517852    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e36c0ce9-7f97-4d93-a199-92e1d130eb0b-lib-modules\") pod \"kindnet-7cnvx\" (UID: \"e36c0ce9-7f97-4d93-a199-92e1d130eb0b\") " pod="kube-system/kindnet-7cnvx"
	Oct 26 15:11:01 no-preload-475081 kubelet[2285]: I1026 15:11:01.517879    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbx2m\" (UniqueName: \"kubernetes.io/projected/5b84f479-f0f8-4260-bd71-ce14b36bae0d-kube-api-access-dbx2m\") pod \"kube-proxy-smtlg\" (UID: \"5b84f479-f0f8-4260-bd71-ce14b36bae0d\") " pod="kube-system/kube-proxy-smtlg"
	Oct 26 15:11:01 no-preload-475081 kubelet[2285]: I1026 15:11:01.517910    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5b84f479-f0f8-4260-bd71-ce14b36bae0d-kube-proxy\") pod \"kube-proxy-smtlg\" (UID: \"5b84f479-f0f8-4260-bd71-ce14b36bae0d\") " pod="kube-system/kube-proxy-smtlg"
	Oct 26 15:11:04 no-preload-475081 kubelet[2285]: I1026 15:11:04.049847    2285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-smtlg" podStartSLOduration=3.049821212 podStartE2EDuration="3.049821212s" podCreationTimestamp="2025-10-26 15:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:11:02.050705718 +0000 UTC m=+6.149401191" watchObservedRunningTime="2025-10-26 15:11:04.049821212 +0000 UTC m=+8.148516687"
	Oct 26 15:11:04 no-preload-475081 kubelet[2285]: I1026 15:11:04.445773    2285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7cnvx" podStartSLOduration=1.727714653 podStartE2EDuration="3.445745956s" podCreationTimestamp="2025-10-26 15:11:01 +0000 UTC" firstStartedPulling="2025-10-26 15:11:01.802474545 +0000 UTC m=+5.901170009" lastFinishedPulling="2025-10-26 15:11:03.520505838 +0000 UTC m=+7.619201312" observedRunningTime="2025-10-26 15:11:04.05019571 +0000 UTC m=+8.148891181" watchObservedRunningTime="2025-10-26 15:11:04.445745956 +0000 UTC m=+8.544441428"
	Oct 26 15:11:14 no-preload-475081 kubelet[2285]: I1026 15:11:14.226712    2285 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 26 15:11:14 no-preload-475081 kubelet[2285]: I1026 15:11:14.314438    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ba1a7ff-bfea-43bf-b65e-2b1309709ac4-config-volume\") pod \"coredns-66bc5c9577-knr22\" (UID: \"4ba1a7ff-bfea-43bf-b65e-2b1309709ac4\") " pod="kube-system/coredns-66bc5c9577-knr22"
	Oct 26 15:11:14 no-preload-475081 kubelet[2285]: I1026 15:11:14.314486    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/15518fa4-cf2c-44fe-8b16-e222dcbae51f-tmp\") pod \"storage-provisioner\" (UID: \"15518fa4-cf2c-44fe-8b16-e222dcbae51f\") " pod="kube-system/storage-provisioner"
	Oct 26 15:11:14 no-preload-475081 kubelet[2285]: I1026 15:11:14.314506    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqjh9\" (UniqueName: \"kubernetes.io/projected/15518fa4-cf2c-44fe-8b16-e222dcbae51f-kube-api-access-gqjh9\") pod \"storage-provisioner\" (UID: \"15518fa4-cf2c-44fe-8b16-e222dcbae51f\") " pod="kube-system/storage-provisioner"
	Oct 26 15:11:14 no-preload-475081 kubelet[2285]: I1026 15:11:14.314571    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbbq5\" (UniqueName: \"kubernetes.io/projected/4ba1a7ff-bfea-43bf-b65e-2b1309709ac4-kube-api-access-lbbq5\") pod \"coredns-66bc5c9577-knr22\" (UID: \"4ba1a7ff-bfea-43bf-b65e-2b1309709ac4\") " pod="kube-system/coredns-66bc5c9577-knr22"
	Oct 26 15:11:15 no-preload-475081 kubelet[2285]: I1026 15:11:15.086739    2285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.086713357 podStartE2EDuration="13.086713357s" podCreationTimestamp="2025-10-26 15:11:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:11:15.075610722 +0000 UTC m=+19.174306198" watchObservedRunningTime="2025-10-26 15:11:15.086713357 +0000 UTC m=+19.185408830"
	Oct 26 15:11:17 no-preload-475081 kubelet[2285]: I1026 15:11:17.156053    2285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-knr22" podStartSLOduration=16.156026748 podStartE2EDuration="16.156026748s" podCreationTimestamp="2025-10-26 15:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:11:15.086832826 +0000 UTC m=+19.185528300" watchObservedRunningTime="2025-10-26 15:11:17.156026748 +0000 UTC m=+21.254722215"
	Oct 26 15:11:17 no-preload-475081 kubelet[2285]: I1026 15:11:17.233609    2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cbcq\" (UniqueName: \"kubernetes.io/projected/fa6c47a1-6c0a-41c3-a288-0ec79f76a4ba-kube-api-access-5cbcq\") pod \"busybox\" (UID: \"fa6c47a1-6c0a-41c3-a288-0ec79f76a4ba\") " pod="default/busybox"
	Oct 26 15:11:19 no-preload-475081 kubelet[2285]: I1026 15:11:19.089688    2285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.389848162 podStartE2EDuration="2.089663636s" podCreationTimestamp="2025-10-26 15:11:17 +0000 UTC" firstStartedPulling="2025-10-26 15:11:17.484008613 +0000 UTC m=+21.582704064" lastFinishedPulling="2025-10-26 15:11:18.183824072 +0000 UTC m=+22.282519538" observedRunningTime="2025-10-26 15:11:19.089576877 +0000 UTC m=+23.188272350" watchObservedRunningTime="2025-10-26 15:11:19.089663636 +0000 UTC m=+23.188359109"
	
	
	==> storage-provisioner [4116f82c92ee56bda8e02a27b78165ad65c346fe602330d183a970667b10ea24] <==
	I1026 15:11:14.615462       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 15:11:14.624632       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 15:11:14.624698       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 15:11:14.627761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:11:14.634724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:11:14.634991       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:11:14.635238       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-475081_2bd70316-0bbf-4af3-bb97-f676ac271b51!
	I1026 15:11:14.635721       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a47d1b1-8ba0-4362-958c-984ac082c96f", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-475081_2bd70316-0bbf-4af3-bb97-f676ac271b51 became leader
	W1026 15:11:14.637807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:11:14.642049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:11:14.735668       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-475081_2bd70316-0bbf-4af3-bb97-f676ac271b51!
	W1026 15:11:16.646016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:11:16.650630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:11:18.653520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:11:18.657378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:11:20.660669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:11:20.665780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:11:22.668386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:11:22.672272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:11:24.676054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:11:24.681129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
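Every warning in this block comes from the provisioner's leader-election lock still using the v1 Endpoints API, which the cluster reports as deprecated since v1.33 in favour of discovery.k8s.io/v1 EndpointSlice. A quick look at the object behind the warnings, whose name appears in the leader-election event above:

    kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
    kubectl -n kube-system get endpointslices   # the discovery.k8s.io/v1 replacement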
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-475081 -n no-preload-475081
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-475081 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.40s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (6.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-330914 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-330914 --alsologtostderr -v=1: exit status 80 (2.522164667s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-330914 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 15:12:36.726311 1097324 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:12:36.726591 1097324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:12:36.726601 1097324 out.go:374] Setting ErrFile to fd 2...
	I1026 15:12:36.726606 1097324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:12:36.726896 1097324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:12:36.727209 1097324 out.go:368] Setting JSON to false
	I1026 15:12:36.727273 1097324 mustload.go:65] Loading cluster: old-k8s-version-330914
	I1026 15:12:36.727663 1097324 config.go:182] Loaded profile config "old-k8s-version-330914": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 15:12:36.728113 1097324 cli_runner.go:164] Run: docker container inspect old-k8s-version-330914 --format={{.State.Status}}
	I1026 15:12:36.747621 1097324 host.go:66] Checking if "old-k8s-version-330914" exists ...
	I1026 15:12:36.747932 1097324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:12:36.810406 1097324 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-26 15:12:36.799820581 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:12:36.811319 1097324 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-330914 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 15:12:36.813263 1097324 out.go:179] * Pausing node old-k8s-version-330914 ... 
	I1026 15:12:36.814575 1097324 host.go:66] Checking if "old-k8s-version-330914" exists ...
	I1026 15:12:36.814934 1097324 ssh_runner.go:195] Run: systemctl --version
	I1026 15:12:36.814982 1097324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-330914
	I1026 15:12:36.835171 1097324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33832 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/old-k8s-version-330914/id_rsa Username:docker}
	I1026 15:12:36.938515 1097324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:12:36.952268 1097324 pause.go:52] kubelet running: true
	I1026 15:12:36.952352 1097324 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:12:37.120430 1097324 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:12:37.120520 1097324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:12:37.191975 1097324 cri.go:89] found id: "72d2bf4d876877af13ced9989fac81433cfe9707f6cc1c40255eff4437e7cb7a"
	I1026 15:12:37.191999 1097324 cri.go:89] found id: "b9b6726cc13f8a84b43e30b07c19acad2e63b4378a8bf17b7d9363d787298f47"
	I1026 15:12:37.192006 1097324 cri.go:89] found id: "bcba52fd1283c6a8528b225e7149f8ad6f13d72ccdf6c221344f3d60fb7c2912"
	I1026 15:12:37.192011 1097324 cri.go:89] found id: "9d7f5b66a3f13ea53acbb40e7d705efc2a46e95c15e0215793c795a76ecbaef1"
	I1026 15:12:37.192015 1097324 cri.go:89] found id: "8d54d1c865642a190dabbe2a4e3938bf3b3c9343a8c8d4d402b72a694a82f3bc"
	I1026 15:12:37.192020 1097324 cri.go:89] found id: "57862b704429a1e7b57796a2620311a2e27ce616153a415ac4d41876a1582708"
	I1026 15:12:37.192023 1097324 cri.go:89] found id: "e7c9e2373d25df292a06c5e68b12ca31b0890e6f5f98c7704a6a20c7acce02f7"
	I1026 15:12:37.192028 1097324 cri.go:89] found id: "14610085016dbaf8341ce666f39a20518090a5e59a40d14c2f08730cc477f696"
	I1026 15:12:37.192032 1097324 cri.go:89] found id: "ebe6998e952fa61da87a8c37ca602b0f2ebdf5f7cf4025c9fd2507b770af8504"
	I1026 15:12:37.192043 1097324 cri.go:89] found id: "8a771a5866228d024c56d769dc7c0deb97ef861cd37d504f2e0ead44a3d579b8"
	I1026 15:12:37.192048 1097324 cri.go:89] found id: "0c24c2a5f615fda4210dcf32cae74fec2545fc2e38658db2f8992a93a3393c3a"
	I1026 15:12:37.192052 1097324 cri.go:89] found id: ""
	I1026 15:12:37.192103 1097324 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:12:37.205508 1097324 retry.go:31] will retry after 173.307891ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:12:37Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:12:37.379944 1097324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:12:37.393781 1097324 pause.go:52] kubelet running: false
	I1026 15:12:37.393848 1097324 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:12:37.544143 1097324 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:12:37.544291 1097324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:12:37.615289 1097324 cri.go:89] found id: "72d2bf4d876877af13ced9989fac81433cfe9707f6cc1c40255eff4437e7cb7a"
	I1026 15:12:37.615319 1097324 cri.go:89] found id: "b9b6726cc13f8a84b43e30b07c19acad2e63b4378a8bf17b7d9363d787298f47"
	I1026 15:12:37.615325 1097324 cri.go:89] found id: "bcba52fd1283c6a8528b225e7149f8ad6f13d72ccdf6c221344f3d60fb7c2912"
	I1026 15:12:37.615330 1097324 cri.go:89] found id: "9d7f5b66a3f13ea53acbb40e7d705efc2a46e95c15e0215793c795a76ecbaef1"
	I1026 15:12:37.615335 1097324 cri.go:89] found id: "8d54d1c865642a190dabbe2a4e3938bf3b3c9343a8c8d4d402b72a694a82f3bc"
	I1026 15:12:37.615340 1097324 cri.go:89] found id: "57862b704429a1e7b57796a2620311a2e27ce616153a415ac4d41876a1582708"
	I1026 15:12:37.615344 1097324 cri.go:89] found id: "e7c9e2373d25df292a06c5e68b12ca31b0890e6f5f98c7704a6a20c7acce02f7"
	I1026 15:12:37.615348 1097324 cri.go:89] found id: "14610085016dbaf8341ce666f39a20518090a5e59a40d14c2f08730cc477f696"
	I1026 15:12:37.615352 1097324 cri.go:89] found id: "ebe6998e952fa61da87a8c37ca602b0f2ebdf5f7cf4025c9fd2507b770af8504"
	I1026 15:12:37.615367 1097324 cri.go:89] found id: "8a771a5866228d024c56d769dc7c0deb97ef861cd37d504f2e0ead44a3d579b8"
	I1026 15:12:37.615372 1097324 cri.go:89] found id: "0c24c2a5f615fda4210dcf32cae74fec2545fc2e38658db2f8992a93a3393c3a"
	I1026 15:12:37.615376 1097324 cri.go:89] found id: ""
	I1026 15:12:37.615423 1097324 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:12:37.628685 1097324 retry.go:31] will retry after 300.78316ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:12:37Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:12:37.930356 1097324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:12:37.945145 1097324 pause.go:52] kubelet running: false
	I1026 15:12:37.945231 1097324 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:12:38.092122 1097324 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:12:38.092226 1097324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:12:38.166537 1097324 cri.go:89] found id: "72d2bf4d876877af13ced9989fac81433cfe9707f6cc1c40255eff4437e7cb7a"
	I1026 15:12:38.166560 1097324 cri.go:89] found id: "b9b6726cc13f8a84b43e30b07c19acad2e63b4378a8bf17b7d9363d787298f47"
	I1026 15:12:38.166564 1097324 cri.go:89] found id: "bcba52fd1283c6a8528b225e7149f8ad6f13d72ccdf6c221344f3d60fb7c2912"
	I1026 15:12:38.166567 1097324 cri.go:89] found id: "9d7f5b66a3f13ea53acbb40e7d705efc2a46e95c15e0215793c795a76ecbaef1"
	I1026 15:12:38.166569 1097324 cri.go:89] found id: "8d54d1c865642a190dabbe2a4e3938bf3b3c9343a8c8d4d402b72a694a82f3bc"
	I1026 15:12:38.166572 1097324 cri.go:89] found id: "57862b704429a1e7b57796a2620311a2e27ce616153a415ac4d41876a1582708"
	I1026 15:12:38.166575 1097324 cri.go:89] found id: "e7c9e2373d25df292a06c5e68b12ca31b0890e6f5f98c7704a6a20c7acce02f7"
	I1026 15:12:38.166577 1097324 cri.go:89] found id: "14610085016dbaf8341ce666f39a20518090a5e59a40d14c2f08730cc477f696"
	I1026 15:12:38.166579 1097324 cri.go:89] found id: "ebe6998e952fa61da87a8c37ca602b0f2ebdf5f7cf4025c9fd2507b770af8504"
	I1026 15:12:38.166586 1097324 cri.go:89] found id: "8a771a5866228d024c56d769dc7c0deb97ef861cd37d504f2e0ead44a3d579b8"
	I1026 15:12:38.166591 1097324 cri.go:89] found id: "0c24c2a5f615fda4210dcf32cae74fec2545fc2e38658db2f8992a93a3393c3a"
	I1026 15:12:38.166595 1097324 cri.go:89] found id: ""
	I1026 15:12:38.166652 1097324 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:12:38.179614 1097324 retry.go:31] will retry after 712.803333ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:12:38Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:12:38.893387 1097324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:12:38.921460 1097324 pause.go:52] kubelet running: false
	I1026 15:12:38.921529 1097324 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:12:39.083661 1097324 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:12:39.083779 1097324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:12:39.158872 1097324 cri.go:89] found id: "72d2bf4d876877af13ced9989fac81433cfe9707f6cc1c40255eff4437e7cb7a"
	I1026 15:12:39.158901 1097324 cri.go:89] found id: "b9b6726cc13f8a84b43e30b07c19acad2e63b4378a8bf17b7d9363d787298f47"
	I1026 15:12:39.158908 1097324 cri.go:89] found id: "bcba52fd1283c6a8528b225e7149f8ad6f13d72ccdf6c221344f3d60fb7c2912"
	I1026 15:12:39.158912 1097324 cri.go:89] found id: "9d7f5b66a3f13ea53acbb40e7d705efc2a46e95c15e0215793c795a76ecbaef1"
	I1026 15:12:39.158916 1097324 cri.go:89] found id: "8d54d1c865642a190dabbe2a4e3938bf3b3c9343a8c8d4d402b72a694a82f3bc"
	I1026 15:12:39.158919 1097324 cri.go:89] found id: "57862b704429a1e7b57796a2620311a2e27ce616153a415ac4d41876a1582708"
	I1026 15:12:39.158923 1097324 cri.go:89] found id: "e7c9e2373d25df292a06c5e68b12ca31b0890e6f5f98c7704a6a20c7acce02f7"
	I1026 15:12:39.158926 1097324 cri.go:89] found id: "14610085016dbaf8341ce666f39a20518090a5e59a40d14c2f08730cc477f696"
	I1026 15:12:39.158930 1097324 cri.go:89] found id: "ebe6998e952fa61da87a8c37ca602b0f2ebdf5f7cf4025c9fd2507b770af8504"
	I1026 15:12:39.158949 1097324 cri.go:89] found id: "8a771a5866228d024c56d769dc7c0deb97ef861cd37d504f2e0ead44a3d579b8"
	I1026 15:12:39.158953 1097324 cri.go:89] found id: "0c24c2a5f615fda4210dcf32cae74fec2545fc2e38658db2f8992a93a3393c3a"
	I1026 15:12:39.158956 1097324 cri.go:89] found id: ""
	I1026 15:12:39.159008 1097324 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:12:39.175778 1097324 out.go:203] 
	W1026 15:12:39.177200 1097324 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:12:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:12:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 15:12:39.177242 1097324 out.go:285] * 
	* 
	W1026 15:12:39.182749 1097324 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 15:12:39.184217 1097324 out.go:203] 

** /stderr **
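For reference, the retry.go delays in the stderr above (~173ms, ~300ms, ~712ms) are consistent with a jittered, roughly doubling backoff. A small Go sketch of that shape; the base delay and jitter factor here are assumptions, not minikube's actual constants:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// backoff doubles a base delay per attempt and adds up to 50% jitter,
	// matching the shape (not necessarily the constants) of the retry.go
	// delays in the log above.
	func backoff(attempt int, base time.Duration) time.Duration {
		d := base << attempt
		jitter := time.Duration(rand.Int63n(int64(d / 2)))
		return d + jitter
	}

	func main() {
		for attempt := 0; attempt < 3; attempt++ {
			fmt.Println(backoff(attempt, 150*time.Millisecond))
		}
	}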
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-330914 --alsologtostderr -v=1 failed: exit status 80
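The pause ultimately fails because "sudo runc list -f json" cannot open /run/runc, which is runc's default state root; on this cri-o node no container state exists at that path. A hedged Go sketch of probing alternative runtime roots via runc's --root flag (the candidate paths other than /run/runc are assumptions, not taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// listContainers shells out to runc the way the pause path does; --root
	// is runc's state-root selector (default /run/runc).
	func listContainers(root string) ([]byte, error) {
		args := []string{"runc"}
		if root != "" {
			args = append(args, "--root", root)
		}
		args = append(args, "list", "-f", "json")
		return exec.Command("sudo", args...).CombinedOutput()
	}

	func main() {
		// Candidate roots are assumptions; only /run/runc appears in the log.
		for _, root := range []string{"", "/run/runc", "/run/crun"} {
			out, err := listContainers(root)
			fmt.Printf("root=%q err=%v out=%s\n", root, err, out)
			if err == nil {
				return
			}
		}
	}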
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-330914
helpers_test.go:243: (dbg) docker inspect old-k8s-version-330914:

-- stdout --
	[
	    {
	        "Id": "91ae2e5aad345c2e0703f327fd036502476cd376cb2a6c583db438ed9b0ac0fe",
	        "Created": "2025-10-26T15:10:26.438664017Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1086807,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:11:38.127982725Z",
	            "FinishedAt": "2025-10-26T15:11:37.229715977Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/91ae2e5aad345c2e0703f327fd036502476cd376cb2a6c583db438ed9b0ac0fe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91ae2e5aad345c2e0703f327fd036502476cd376cb2a6c583db438ed9b0ac0fe/hostname",
	        "HostsPath": "/var/lib/docker/containers/91ae2e5aad345c2e0703f327fd036502476cd376cb2a6c583db438ed9b0ac0fe/hosts",
	        "LogPath": "/var/lib/docker/containers/91ae2e5aad345c2e0703f327fd036502476cd376cb2a6c583db438ed9b0ac0fe/91ae2e5aad345c2e0703f327fd036502476cd376cb2a6c583db438ed9b0ac0fe-json.log",
	        "Name": "/old-k8s-version-330914",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-330914:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-330914",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "91ae2e5aad345c2e0703f327fd036502476cd376cb2a6c583db438ed9b0ac0fe",
	                "LowerDir": "/var/lib/docker/overlay2/1ed9fa6c3b37e53a285735adb39a4961c8ca3dc94f31480b0cfd0d1b96fc7a86-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1ed9fa6c3b37e53a285735adb39a4961c8ca3dc94f31480b0cfd0d1b96fc7a86/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1ed9fa6c3b37e53a285735adb39a4961c8ca3dc94f31480b0cfd0d1b96fc7a86/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1ed9fa6c3b37e53a285735adb39a4961c8ca3dc94f31480b0cfd0d1b96fc7a86/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-330914",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-330914/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-330914",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-330914",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-330914",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bf325b359f01d99f861ac20000363893f8802fb28f33bafd4d0f7af6c69650a4",
	            "SandboxKey": "/var/run/docker/netns/bf325b359f01",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33832"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33833"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33836"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33834"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33835"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-330914": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:3d:6e:c3:3e:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "56ce3fb526f5012c2231b9293c9ece449bc551903b4972b11997763e4592ce3f",
	                    "EndpointID": "415724bd1a1f64b6c859cc16e71f69fd10cc9d856e62d55af7b9efbbf1ee7731",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-330914",
	                        "91ae2e5aad34"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
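The NetworkSettings.Ports block above is what the earlier cli_runner template query parses to locate the SSH endpoint (22/tcp -> 127.0.0.1:33832). A small Go sketch that runs the same inspect template seen in the log (the template string is verbatim from the cli_runner line; only the wrapper program is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The exact Go template from the cli_runner line earlier in this log;
		// it indexes the inspect JSON down to the host port bound to 22/tcp.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
			"old-k8s-version-330914").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 33832 above
	}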
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330914 -n old-k8s-version-330914
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330914 -n old-k8s-version-330914: exit status 2 (362.90505ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
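The --format={{.Host}} flag renders status through a Go text/template, which is why the stdout above is the bare word "Running". A hypothetical mirror of that rendering; the Status struct and its field names here are assumptions, not minikube's actual types:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in struct; minikube's real status type differs, but
	// the template mechanics of --format={{.Host}} are the same.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		t := template.Must(template.New("status").Parse("{{.Host}}\n"))
		_ = t.Execute(os.Stdout, Status{Host: "Running", APIServer: "Paused"})
	}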
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-330914 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-330914 logs -n 25: (1.343189282s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p force-systemd-flag-391593                                                                                                                                                                                                                  │ force-systemd-flag-391593 │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │ 26 Oct 25 15:10 UTC │
	│ start   │ -p cert-options-124833 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-124833       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ stop    │ -p NoKubernetes-917490                                                                                                                                                                                                                        │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ start   │ -p NoKubernetes-917490 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ ssh     │ -p NoKubernetes-917490 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ delete  │ -p NoKubernetes-917490                                                                                                                                                                                                                        │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ start   │ -p old-k8s-version-330914 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-330914    │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:11 UTC │
	│ ssh     │ cert-options-124833 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-124833       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ ssh     │ -p cert-options-124833 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-124833       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ delete  │ -p cert-options-124833                                                                                                                                                                                                                        │ cert-options-124833       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ start   │ -p no-preload-475081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-475081         │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:11 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-330914 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-330914    │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ stop    │ -p old-k8s-version-330914 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-330914    │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ addons  │ enable metrics-server -p no-preload-475081 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-475081         │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ stop    │ -p no-preload-475081 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-475081         │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-330914 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-330914    │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p old-k8s-version-330914 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-330914    │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:12 UTC │
	│ addons  │ enable dashboard -p no-preload-475081 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-475081         │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p no-preload-475081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-475081         │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p kubernetes-upgrade-176599 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-176599 │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	│ start   │ -p kubernetes-upgrade-176599 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-176599 │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ delete  │ -p kubernetes-upgrade-176599                                                                                                                                                                                                                  │ kubernetes-upgrade-176599 │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p embed-certs-535130 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-535130        │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	│ image   │ old-k8s-version-330914 image list --format=json                                                                                                                                                                                               │ old-k8s-version-330914    │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ pause   │ -p old-k8s-version-330914 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-330914    │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:12:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:12:22.723695 1094884 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:12:22.723977 1094884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:12:22.723989 1094884 out.go:374] Setting ErrFile to fd 2...
	I1026 15:12:22.723995 1094884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:12:22.724291 1094884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:12:22.724794 1094884 out.go:368] Setting JSON to false
	I1026 15:12:22.726080 1094884 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10491,"bootTime":1761481052,"procs":413,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:12:22.726194 1094884 start.go:141] virtualization: kvm guest
	I1026 15:12:22.728318 1094884 out.go:179] * [embed-certs-535130] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:12:22.729604 1094884 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:12:22.729606 1094884 notify.go:220] Checking for updates...
	I1026 15:12:22.732660 1094884 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:12:22.734078 1094884 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:12:22.735315 1094884 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 15:12:22.736302 1094884 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:12:22.737366 1094884 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:12:22.738837 1094884 config.go:182] Loaded profile config "cert-expiration-619245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:12:22.738935 1094884 config.go:182] Loaded profile config "no-preload-475081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:12:22.739013 1094884 config.go:182] Loaded profile config "old-k8s-version-330914": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 15:12:22.739113 1094884 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:12:22.764422 1094884 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 15:12:22.764534 1094884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:12:22.829223 1094884 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-26 15:12:22.816741758 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:12:22.829376 1094884 docker.go:318] overlay module found
	I1026 15:12:22.832034 1094884 out.go:179] * Using the docker driver based on user configuration
	W1026 15:12:18.001061 1086607 pod_ready.go:104] pod "coredns-5dd5756b68-hzjqn" is not "Ready", error: <nil>
	W1026 15:12:20.003024 1086607 pod_ready.go:104] pod "coredns-5dd5756b68-hzjqn" is not "Ready", error: <nil>
	W1026 15:12:22.003141 1086607 pod_ready.go:104] pod "coredns-5dd5756b68-hzjqn" is not "Ready", error: <nil>
	I1026 15:12:22.833219 1094884 start.go:305] selected driver: docker
	I1026 15:12:22.833236 1094884 start.go:925] validating driver "docker" against <nil>
	I1026 15:12:22.833255 1094884 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:12:22.833817 1094884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:12:22.893827 1094884 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-26 15:12:22.883069758 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:12:22.894093 1094884 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:12:22.894326 1094884 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:12:22.895696 1094884 out.go:179] * Using Docker driver with root privileges
	I1026 15:12:22.896861 1094884 cni.go:84] Creating CNI manager for ""
	I1026 15:12:22.896952 1094884 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:12:22.896969 1094884 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 15:12:22.897079 1094884 start.go:349] cluster config:
	{Name:embed-certs-535130 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:12:22.898546 1094884 out.go:179] * Starting "embed-certs-535130" primary control-plane node in "embed-certs-535130" cluster
	I1026 15:12:22.899674 1094884 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:12:22.900838 1094884 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:12:22.901910 1094884 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:12:22.901967 1094884 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 15:12:22.901983 1094884 cache.go:58] Caching tarball of preloaded images
	I1026 15:12:22.902045 1094884 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:12:22.902150 1094884 preload.go:233] Found /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 15:12:22.902201 1094884 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:12:22.902353 1094884 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/config.json ...
	I1026 15:12:22.902381 1094884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/config.json: {Name:mk12a66b75728d08ad27e4045a242e76128ff185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:22.925433 1094884 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:12:22.925455 1094884 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:12:22.925472 1094884 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:12:22.925507 1094884 start.go:360] acquireMachinesLock for embed-certs-535130: {Name:mk2308f6e6d84ecfdd2789c813704db715591895 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:12:22.925609 1094884 start.go:364] duration metric: took 84.211µs to acquireMachinesLock for "embed-certs-535130"
	I1026 15:12:22.925633 1094884 start.go:93] Provisioning new machine with config: &{Name:embed-certs-535130 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:12:22.925700 1094884 start.go:125] createHost starting for "" (driver="docker")
	W1026 15:12:19.071838 1087870 pod_ready.go:104] pod "coredns-66bc5c9577-knr22" is not "Ready", error: <nil>
	W1026 15:12:21.570936 1087870 pod_ready.go:104] pod "coredns-66bc5c9577-knr22" is not "Ready", error: <nil>
	I1026 15:12:23.502675 1086607 pod_ready.go:94] pod "coredns-5dd5756b68-hzjqn" is "Ready"
	I1026 15:12:23.502703 1086607 pod_ready.go:86] duration metric: took 34.507438685s for pod "coredns-5dd5756b68-hzjqn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:23.506504 1086607 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:23.511539 1086607 pod_ready.go:94] pod "etcd-old-k8s-version-330914" is "Ready"
	I1026 15:12:23.511569 1086607 pod_ready.go:86] duration metric: took 5.033388ms for pod "etcd-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:23.515140 1086607 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:23.520139 1086607 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-330914" is "Ready"
	I1026 15:12:23.520198 1086607 pod_ready.go:86] duration metric: took 4.997939ms for pod "kube-apiserver-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:23.523393 1086607 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:23.700379 1086607 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-330914" is "Ready"
	I1026 15:12:23.700409 1086607 pod_ready.go:86] duration metric: took 176.992551ms for pod "kube-controller-manager-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:23.900733 1086607 pod_ready.go:83] waiting for pod "kube-proxy-829lp" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:24.299617 1086607 pod_ready.go:94] pod "kube-proxy-829lp" is "Ready"
	I1026 15:12:24.299649 1086607 pod_ready.go:86] duration metric: took 398.889482ms for pod "kube-proxy-829lp" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:24.500562 1086607 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:24.900567 1086607 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-330914" is "Ready"
	I1026 15:12:24.900600 1086607 pod_ready.go:86] duration metric: took 400.008062ms for pod "kube-scheduler-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:24.900617 1086607 pod_ready.go:40] duration metric: took 35.916930354s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:12:24.950321 1086607 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1026 15:12:24.955073 1086607 out.go:203] 
	W1026 15:12:24.956447 1086607 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1026 15:12:24.957576 1086607 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1026 15:12:24.958913 1086607 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-330914" cluster and "default" namespace by default
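The "minor skew: 6" figure in start.go:624 above is the distance between the kubectl client's minor version (1.34) and the cluster's (1.28); anything beyond +/-1 triggers the compatibility warning printed just before it. A minimal sketch of that comparison, assuming plain "major.minor.patch" version strings rather than minikube's actual parsing:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor
// components of two "major.minor[.patch]" version strings.
func minorSkew(client, cluster string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(cluster)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, _ := minorSkew("1.34.1", "1.28.0") // the versions from the log above
	fmt.Println("minor skew:", skew)         // 6 -> compatibility warning; 0 for a matched pair
}
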
	I1026 15:12:22.927779 1094884 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 15:12:22.928008 1094884 start.go:159] libmachine.API.Create for "embed-certs-535130" (driver="docker")
	I1026 15:12:22.928043 1094884 client.go:168] LocalClient.Create starting
	I1026 15:12:22.928138 1094884 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem
	I1026 15:12:22.928224 1094884 main.go:141] libmachine: Decoding PEM data...
	I1026 15:12:22.928244 1094884 main.go:141] libmachine: Parsing certificate...
	I1026 15:12:22.928320 1094884 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem
	I1026 15:12:22.928345 1094884 main.go:141] libmachine: Decoding PEM data...
	I1026 15:12:22.928354 1094884 main.go:141] libmachine: Parsing certificate...
	I1026 15:12:22.928694 1094884 cli_runner.go:164] Run: docker network inspect embed-certs-535130 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 15:12:22.947434 1094884 cli_runner.go:211] docker network inspect embed-certs-535130 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 15:12:22.947544 1094884 network_create.go:284] running [docker network inspect embed-certs-535130] to gather additional debugging logs...
	I1026 15:12:22.947572 1094884 cli_runner.go:164] Run: docker network inspect embed-certs-535130
	W1026 15:12:22.965884 1094884 cli_runner.go:211] docker network inspect embed-certs-535130 returned with exit code 1
	I1026 15:12:22.965918 1094884 network_create.go:287] error running [docker network inspect embed-certs-535130]: docker network inspect embed-certs-535130: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-535130 not found
	I1026 15:12:22.965936 1094884 network_create.go:289] output of [docker network inspect embed-certs-535130]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-535130 not found
	
	** /stderr **
	I1026 15:12:22.966046 1094884 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:12:22.985557 1094884 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fa58be42f477 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:e4:ad:45:54:67} reservation:<nil>}
	I1026 15:12:22.986359 1094884 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-788b1aa150f9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:3d:9b:f7:9b:2d} reservation:<nil>}
	I1026 15:12:22.987196 1094884 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3ea0f8afe5af IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:d6:81:f4:17:77:eb} reservation:<nil>}
	I1026 15:12:22.988126 1094884 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec6510}
	I1026 15:12:22.988153 1094884 network_create.go:124] attempt to create docker network embed-certs-535130 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1026 15:12:22.988258 1094884 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-535130 embed-certs-535130
	I1026 15:12:23.053788 1094884 network_create.go:108] docker network embed-certs-535130 192.168.76.0/24 created
	I1026 15:12:23.053820 1094884 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-535130" container
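The three "skipping subnet ... that is taken" lines show the free-subnet search: candidate 192.168.x.0/24 networks are probed in order (49, 58, 67, then 76 in this run) until one is unclaimed, and that subnet's .1 becomes the gateway while .2 becomes the node's static IP. A minimal first-fit sketch, assuming the fixed step of 9 in the third octet observed in the log:

package main

import "fmt"

// firstFreeSubnet walks candidate /24 subnets starting at 192.168.49.0
// and returns the first one not present in taken.
func firstFreeSubnet(taken map[string]bool) (string, bool) {
	for octet := 49; octet <= 255; octet += 9 { // 49, 58, 67, 76, ... as in the log
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	taken := map[string]bool{ // the subnets the log reported as taken
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	subnet, _ := firstFreeSubnet(taken)
	fmt.Println(subnet) // 192.168.76.0/24, matching the network created above
}
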
	I1026 15:12:23.053922 1094884 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 15:12:23.073511 1094884 cli_runner.go:164] Run: docker volume create embed-certs-535130 --label name.minikube.sigs.k8s.io=embed-certs-535130 --label created_by.minikube.sigs.k8s.io=true
	I1026 15:12:23.092193 1094884 oci.go:103] Successfully created a docker volume embed-certs-535130
	I1026 15:12:23.092294 1094884 cli_runner.go:164] Run: docker run --rm --name embed-certs-535130-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-535130 --entrypoint /usr/bin/test -v embed-certs-535130:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 15:12:23.512406 1094884 oci.go:107] Successfully prepared a docker volume embed-certs-535130
	I1026 15:12:23.512440 1094884 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:12:23.512464 1094884 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 15:12:23.512541 1094884 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-535130:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1026 15:12:24.071766 1087870 pod_ready.go:104] pod "coredns-66bc5c9577-knr22" is not "Ready", error: <nil>
	W1026 15:12:26.570742 1087870 pod_ready.go:104] pod "coredns-66bc5c9577-knr22" is not "Ready", error: <nil>
	I1026 15:12:28.044544 1094884 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-535130:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.531951929s)
	I1026 15:12:28.044587 1094884 kic.go:203] duration metric: took 4.532116219s to extract preloaded images to volume ...
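The 4.5 s "Completed: docker run ... tar -I lz4 -xf" step above unpacks the preloaded image tarball directly into the named volume, so the node container later starts with /var already populated. A minimal sketch of issuing that pattern from Go; the tarball path, volume, and image names here are hypothetical stand-ins for the real ones:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mount the lz4-compressed preload read-only, mount the target volume,
	// and run tar inside a throwaway container to extract into the volume.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "/tmp/preloaded-images.tar.lz4:/preloaded.tar:ro", // hypothetical path
		"-v", "my-volume:/extractDir",                           // hypothetical volume
		"some/kicbase-image",                                    // hypothetical image
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}
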
	W1026 15:12:28.044702 1094884 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 15:12:28.044786 1094884 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 15:12:28.044853 1094884 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 15:12:28.105477 1094884 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-535130 --name embed-certs-535130 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-535130 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-535130 --network embed-certs-535130 --ip 192.168.76.2 --volume embed-certs-535130:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 15:12:28.395695 1094884 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Running}}
	I1026 15:12:28.416487 1094884 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:12:28.437229 1094884 cli_runner.go:164] Run: docker exec embed-certs-535130 stat /var/lib/dpkg/alternatives/iptables
	I1026 15:12:28.483324 1094884 oci.go:144] the created container "embed-certs-535130" has a running status.
	I1026 15:12:28.483369 1094884 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa...
	I1026 15:12:29.157005 1094884 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 15:12:29.183422 1094884 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:12:29.201144 1094884 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 15:12:29.201180 1094884 kic_runner.go:114] Args: [docker exec --privileged embed-certs-535130 chown docker:docker /home/docker/.ssh/authorized_keys]
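kic.go:225 and kic_runner.go:191 above generate an RSA key pair for the node and install the public half as /home/docker/.ssh/authorized_keys (hence the chown that follows). A minimal sketch of producing an authorized_keys line with golang.org/x/crypto/ssh, assuming a 2048-bit key for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate the key pair (minikube writes the private half to id_rsa).
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	// Convert the public key into the one-line authorized_keys format.
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	line := ssh.MarshalAuthorizedKey(pub) // "ssh-rsa AAAA...\n"
	fmt.Printf("%d bytes: %s", len(line), line)
}
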
	I1026 15:12:29.249224 1094884 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:12:29.269108 1094884 machine.go:93] provisionDockerMachine start ...
	I1026 15:12:29.269252 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:29.287870 1094884 main.go:141] libmachine: Using SSH client type: native
	I1026 15:12:29.288147 1094884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1026 15:12:29.288181 1094884 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:12:29.432484 1094884 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-535130
	
	I1026 15:12:29.432520 1094884 ubuntu.go:182] provisioning hostname "embed-certs-535130"
	I1026 15:12:29.432600 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:29.451595 1094884 main.go:141] libmachine: Using SSH client type: native
	I1026 15:12:29.451814 1094884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1026 15:12:29.451827 1094884 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-535130 && echo "embed-certs-535130" | sudo tee /etc/hostname
	I1026 15:12:29.605852 1094884 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-535130
	
	I1026 15:12:29.605944 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:29.625782 1094884 main.go:141] libmachine: Using SSH client type: native
	I1026 15:12:29.626088 1094884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1026 15:12:29.626119 1094884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-535130' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-535130/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-535130' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:12:29.770338 1094884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:12:29.770375 1094884 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-841519/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-841519/.minikube}
	I1026 15:12:29.770428 1094884 ubuntu.go:190] setting up certificates
	I1026 15:12:29.770450 1094884 provision.go:84] configureAuth start
	I1026 15:12:29.770518 1094884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-535130
	I1026 15:12:29.789696 1094884 provision.go:143] copyHostCerts
	I1026 15:12:29.789762 1094884 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem, removing ...
	I1026 15:12:29.789773 1094884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem
	I1026 15:12:29.789856 1094884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem (1082 bytes)
	I1026 15:12:29.789987 1094884 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem, removing ...
	I1026 15:12:29.789999 1094884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem
	I1026 15:12:29.790049 1094884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem (1123 bytes)
	I1026 15:12:29.790145 1094884 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem, removing ...
	I1026 15:12:29.790156 1094884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem
	I1026 15:12:29.790206 1094884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem (1675 bytes)
	I1026 15:12:29.790284 1094884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem org=jenkins.embed-certs-535130 san=[127.0.0.1 192.168.76.2 embed-certs-535130 localhost minikube]
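provision.go:117 above signs a server certificate against the shared CA with the SAN set [127.0.0.1 192.168.76.2 embed-certs-535130 localhost minikube]. A minimal crypto/x509 sketch of assembling that SAN set; for brevity it self-signs, whereas the real flow signs with the CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-535130"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs exactly as logged: IP addresses plus hostnames.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:    []string{"embed-certs-535130", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
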
	I1026 15:12:30.082527 1094884 provision.go:177] copyRemoteCerts
	I1026 15:12:30.082582 1094884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:12:30.082620 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:30.101581 1094884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:12:30.204007 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:12:30.225022 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:12:30.242962 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 15:12:30.260627 1094884 provision.go:87] duration metric: took 490.157243ms to configureAuth
	I1026 15:12:30.260655 1094884 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:12:30.260857 1094884 config.go:182] Loaded profile config "embed-certs-535130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:12:30.260976 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:30.279328 1094884 main.go:141] libmachine: Using SSH client type: native
	I1026 15:12:30.279545 1094884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1026 15:12:30.279561 1094884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:12:30.540929 1094884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:12:30.540953 1094884 machine.go:96] duration metric: took 1.27182251s to provisionDockerMachine
	I1026 15:12:30.540967 1094884 client.go:171] duration metric: took 7.612915574s to LocalClient.Create
	I1026 15:12:30.540991 1094884 start.go:167] duration metric: took 7.612983362s to libmachine.API.Create "embed-certs-535130"
	I1026 15:12:30.541001 1094884 start.go:293] postStartSetup for "embed-certs-535130" (driver="docker")
	I1026 15:12:30.541015 1094884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:12:30.541083 1094884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:12:30.541145 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:30.560194 1094884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:12:30.666065 1094884 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:12:30.669831 1094884 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:12:30.669865 1094884 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:12:30.669877 1094884 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/addons for local assets ...
	I1026 15:12:30.669933 1094884 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/files for local assets ...
	I1026 15:12:30.670044 1094884 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem -> 8450952.pem in /etc/ssl/certs
	I1026 15:12:30.670157 1094884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:12:30.678218 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:12:30.700030 1094884 start.go:296] duration metric: took 159.014656ms for postStartSetup
	I1026 15:12:30.700424 1094884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-535130
	I1026 15:12:30.720118 1094884 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/config.json ...
	I1026 15:12:30.720413 1094884 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:12:30.720465 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:30.739104 1094884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:12:30.837679 1094884 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:12:30.842561 1094884 start.go:128] duration metric: took 7.916843227s to createHost
	I1026 15:12:30.842593 1094884 start.go:83] releasing machines lock for "embed-certs-535130", held for 7.916973049s
	I1026 15:12:30.842682 1094884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-535130
	I1026 15:12:30.861500 1094884 ssh_runner.go:195] Run: cat /version.json
	I1026 15:12:30.861556 1094884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:12:30.861562 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:30.861619 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:30.880085 1094884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:12:30.880552 1094884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:12:31.043055 1094884 ssh_runner.go:195] Run: systemctl --version
	I1026 15:12:31.050442 1094884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:12:31.091997 1094884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:12:31.097046 1094884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:12:31.097112 1094884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:12:31.124040 1094884 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 15:12:31.124067 1094884 start.go:495] detecting cgroup driver to use...
	I1026 15:12:31.124106 1094884 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 15:12:31.124152 1094884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:12:31.143171 1094884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:12:31.157567 1094884 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:12:31.157636 1094884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:12:31.175501 1094884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:12:31.195107 1094884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:12:31.280916 1094884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:12:31.370324 1094884 docker.go:234] disabling docker service ...
	I1026 15:12:31.370389 1094884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:12:31.391038 1094884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:12:31.405225 1094884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:12:31.494860 1094884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:12:31.581190 1094884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:12:31.595100 1094884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:12:31.610576 1094884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:12:31.610643 1094884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:31.621702 1094884 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 15:12:31.621772 1094884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:31.631933 1094884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:31.641706 1094884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:31.652631 1094884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:12:31.662065 1094884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:31.672261 1094884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:31.687254 1094884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:31.697622 1094884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:12:31.705869 1094884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:12:31.714245 1094884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:12:31.797931 1094884 ssh_runner.go:195] Run: sudo systemctl restart crio
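The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon_cgroup, unprivileged-port sysctl) before the daemon-reload and crio restart. A minimal Go equivalent of one such key rewrite, assuming the plain "key = value" TOML layout those commands target:

package main

import (
	"fmt"
	"regexp"
)

// setTOMLKey replaces every `key = ...` line in conf with `key = "value"`,
// mirroring the sed -i 's|^.*pause_image = .*$|...|' calls in the log.
func setTOMLKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"
	conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setTOMLKey(conf, "cgroup_manager", "systemd")
	fmt.Print(conf)
}
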
	I1026 15:12:31.907320 1094884 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:12:31.907394 1094884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:12:31.911700 1094884 start.go:563] Will wait 60s for crictl version
	I1026 15:12:31.911755 1094884 ssh_runner.go:195] Run: which crictl
	I1026 15:12:31.916061 1094884 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:12:31.941571 1094884 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:12:31.941644 1094884 ssh_runner.go:195] Run: crio --version
	I1026 15:12:31.971039 1094884 ssh_runner.go:195] Run: crio --version
	I1026 15:12:32.004653 1094884 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1026 15:12:28.572313 1087870 pod_ready.go:104] pod "coredns-66bc5c9577-knr22" is not "Ready", error: <nil>
	I1026 15:12:31.070880 1087870 pod_ready.go:94] pod "coredns-66bc5c9577-knr22" is "Ready"
	I1026 15:12:31.070915 1087870 pod_ready.go:86] duration metric: took 37.006499908s for pod "coredns-66bc5c9577-knr22" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.073905 1087870 pod_ready.go:83] waiting for pod "etcd-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.078107 1087870 pod_ready.go:94] pod "etcd-no-preload-475081" is "Ready"
	I1026 15:12:31.078138 1087870 pod_ready.go:86] duration metric: took 4.207111ms for pod "etcd-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.080180 1087870 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.084392 1087870 pod_ready.go:94] pod "kube-apiserver-no-preload-475081" is "Ready"
	I1026 15:12:31.084426 1087870 pod_ready.go:86] duration metric: took 4.226805ms for pod "kube-apiserver-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.088708 1087870 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.269156 1087870 pod_ready.go:94] pod "kube-controller-manager-no-preload-475081" is "Ready"
	I1026 15:12:31.269206 1087870 pod_ready.go:86] duration metric: took 180.476065ms for pod "kube-controller-manager-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.468623 1087870 pod_ready.go:83] waiting for pod "kube-proxy-smtlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.869293 1087870 pod_ready.go:94] pod "kube-proxy-smtlg" is "Ready"
	I1026 15:12:31.869330 1087870 pod_ready.go:86] duration metric: took 400.674816ms for pod "kube-proxy-smtlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:32.068930 1087870 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:32.468962 1087870 pod_ready.go:94] pod "kube-scheduler-no-preload-475081" is "Ready"
	I1026 15:12:32.468992 1087870 pod_ready.go:86] duration metric: took 400.035699ms for pod "kube-scheduler-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:32.469006 1087870 pod_ready.go:40] duration metric: took 38.40815001s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:12:32.526497 1087870 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:12:32.529763 1087870 out.go:179] * Done! kubectl is now configured to use "no-preload-475081" cluster and "default" namespace by default
	I1026 15:12:32.005880 1094884 cli_runner.go:164] Run: docker network inspect embed-certs-535130 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:12:32.024572 1094884 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 15:12:32.029206 1094884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:12:32.040870 1094884 kubeadm.go:883] updating cluster {Name:embed-certs-535130 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:12:32.041002 1094884 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:12:32.041061 1094884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:12:32.075869 1094884 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:12:32.075897 1094884 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:12:32.075949 1094884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:12:32.102439 1094884 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:12:32.102468 1094884 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:12:32.102478 1094884 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1026 15:12:32.102571 1094884 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-535130 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:12:32.102633 1094884 ssh_runner.go:195] Run: crio config
	I1026 15:12:32.149754 1094884 cni.go:84] Creating CNI manager for ""
	I1026 15:12:32.149778 1094884 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:12:32.149796 1094884 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:12:32.149823 1094884 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-535130 NodeName:embed-certs-535130 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:12:32.149988 1094884 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-535130"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 15:12:32.150086 1094884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:12:32.158464 1094884 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:12:32.158526 1094884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:12:32.166272 1094884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1026 15:12:32.179046 1094884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:12:32.195352 1094884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
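The kubeadm.yaml written above (the 2214-byte config printed earlier) is a single file holding four YAML documents separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A minimal sketch of walking such a multi-document stream with gopkg.in/yaml.v3, using a trimmed stand-in for the real file:

package main

import (
	"fmt"
	"io"
	"log"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	const cfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
` // trimmed stand-in for the full config above

	dec := yaml.NewDecoder(strings.NewReader(cfg))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break // end of the multi-document stream
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}
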
	I1026 15:12:32.209747 1094884 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:12:32.213887 1094884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
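The grep/echo pipeline above is an idempotent hosts-file update: drop any existing control-plane.minikube.internal line, append the fresh mapping, and copy the result back over /etc/hosts. A minimal Go rendering of the same filter-and-append, operating on an in-memory hosts string:

package main

import (
	"fmt"
	"strings"
)

// upsertHost removes any line already ending in "\t"+name and appends
// a fresh "ip\tname" entry, like the shell pipeline in the log.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.76.1\thost.minikube.internal\n"
	fmt.Print(upsertHost(hosts, "192.168.76.2", "control-plane.minikube.internal"))
}
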
	I1026 15:12:32.224809 1094884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:12:32.308443 1094884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:12:32.338158 1094884 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130 for IP: 192.168.76.2
	I1026 15:12:32.338213 1094884 certs.go:195] generating shared ca certs ...
	I1026 15:12:32.338238 1094884 certs.go:227] acquiring lock for ca certs: {Name:mkc310765b5f037cf348f6c57ba521193a825757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:32.338410 1094884 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key
	I1026 15:12:32.338458 1094884 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key
	I1026 15:12:32.338469 1094884 certs.go:257] generating profile certs ...
	I1026 15:12:32.338529 1094884 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/client.key
	I1026 15:12:32.338550 1094884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/client.crt with IP's: []
	I1026 15:12:32.566180 1094884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/client.crt ...
	I1026 15:12:32.566211 1094884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/client.crt: {Name:mkd6d336e91342a08904be85dabf843a66ea95b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:32.566384 1094884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/client.key ...
	I1026 15:12:32.566397 1094884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/client.key: {Name:mk4416c5b817100d65b64e109f73505f873e43f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:32.566477 1094884 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key.abe399f3
	I1026 15:12:32.566499 1094884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.crt.abe399f3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1026 15:12:32.754452 1094884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.crt.abe399f3 ...
	I1026 15:12:32.754486 1094884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.crt.abe399f3: {Name:mkabb7862e92bef693c45258c1617506096cdb12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:32.754719 1094884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key.abe399f3 ...
	I1026 15:12:32.754740 1094884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key.abe399f3: {Name:mk215afc3790eeabca9034d99e286de6a2066abc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:32.754854 1094884 certs.go:382] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.crt.abe399f3 -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.crt
	I1026 15:12:32.755001 1094884 certs.go:386] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key.abe399f3 -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key
	I1026 15:12:32.755099 1094884 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.key
	I1026 15:12:32.755124 1094884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.crt with IP's: []
	I1026 15:12:33.207302 1094884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.crt ...
	I1026 15:12:33.207334 1094884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.crt: {Name:mk113bd43484e2aa10efeeed24889f71d62785e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:33.207519 1094884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.key ...
	I1026 15:12:33.207536 1094884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.key: {Name:mkfed9208b9b01aa68dc5edcf9bb22e51125ffb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:33.207753 1094884 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem (1338 bytes)
	W1026 15:12:33.207808 1094884 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095_empty.pem, impossibly tiny 0 bytes
	I1026 15:12:33.207819 1094884 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:12:33.207838 1094884 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:12:33.207860 1094884 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:12:33.207882 1094884 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem (1675 bytes)
	I1026 15:12:33.207921 1094884 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:12:33.208583 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:12:33.227595 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:12:33.245914 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:12:33.265321 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:12:33.285592 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1026 15:12:33.304704 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 15:12:33.323649 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:12:33.344374 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:12:33.364210 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem --> /usr/share/ca-certificates/845095.pem (1338 bytes)
	I1026 15:12:33.385157 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /usr/share/ca-certificates/8450952.pem (1708 bytes)
	I1026 15:12:33.404256 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:12:33.423316 1094884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:12:33.436366 1094884 ssh_runner.go:195] Run: openssl version
	I1026 15:12:33.442667 1094884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8450952.pem && ln -fs /usr/share/ca-certificates/8450952.pem /etc/ssl/certs/8450952.pem"
	I1026 15:12:33.451487 1094884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8450952.pem
	I1026 15:12:33.455627 1094884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:26 /usr/share/ca-certificates/8450952.pem
	I1026 15:12:33.455683 1094884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8450952.pem
	I1026 15:12:33.492656 1094884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8450952.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:12:33.502994 1094884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:12:33.512795 1094884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:12:33.517090 1094884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:14 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:12:33.517196 1094884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:12:33.552991 1094884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:12:33.563221 1094884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845095.pem && ln -fs /usr/share/ca-certificates/845095.pem /etc/ssl/certs/845095.pem"
	I1026 15:12:33.572631 1094884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845095.pem
	I1026 15:12:33.577148 1094884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:26 /usr/share/ca-certificates/845095.pem
	I1026 15:12:33.577254 1094884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845095.pem
	I1026 15:12:33.613124 1094884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/845095.pem /etc/ssl/certs/51391683.0"
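Each "openssl x509 -hash -noout" / "ln -fs ... <hash>.0" pair above installs a certificate under its subject-hash name, which is how OpenSSL resolves CAs in a hashed directory like /etc/ssl/certs. A minimal sketch that shells out for the hash and creates the link; the cert path is taken from the log, and the link creation would need root just as the sudo commands do:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/8450952.pem" // path from the log
	// openssl prints the subject hash (e.g. "3ec20f2e") on stdout.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // emulate ln -f: ignore the error if the link is absent
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	log.Printf("linked %s -> %s", link, cert)
}
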
	I1026 15:12:33.622923 1094884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:12:33.627486 1094884 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 15:12:33.627550 1094884 kubeadm.go:400] StartCluster: {Name:embed-certs-535130 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:12:33.627624 1094884 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:12:33.627672 1094884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:12:33.657036 1094884 cri.go:89] found id: ""
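
The empty found id: "" result here means the runtime has no kube-system containers yet, which is expected on a first start. The same query can be reproduced by hand; --quiet prints bare container IDs, so empty output is the equivalent of that log line:

    # Same label filter as the log line above, human-readable first:
    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
    # IDs only, as minikube runs it (empty output == no containers found):
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
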
	I1026 15:12:33.657097 1094884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:12:33.665274 1094884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:12:33.673313 1094884 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 15:12:33.673363 1094884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:12:33.681347 1094884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:12:33.681368 1094884 kubeadm.go:157] found existing configuration files:
	
	I1026 15:12:33.681408 1094884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:12:33.689860 1094884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:12:33.689914 1094884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:12:33.698191 1094884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:12:33.706608 1094884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:12:33.706671 1094884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:12:33.715007 1094884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:12:33.724484 1094884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:12:33.724552 1094884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:12:33.732614 1094884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:12:33.740833 1094884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:12:33.740886 1094884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
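
Each grep/rm pair above applies the same rule to one kubeconfig: keep the file only if it already points at the expected control-plane endpoint, otherwise remove it before kubeadm runs. A condensed sketch of that cleanup, assuming the standard /etc/kubernetes paths shown in the log:

    # Drop kubeconfigs that do not reference the expected endpoint.
    ENDPOINT="https://control-plane.minikube.internal:8443"
    for name in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${name}.conf"
      sudo grep -q "$ENDPOINT" "$conf" 2>/dev/null || sudo rm -f "$conf"
    done
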
	I1026 15:12:33.748934 1094884 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 15:12:33.792631 1094884 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 15:12:33.792748 1094884 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 15:12:33.816843 1094884 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 15:12:33.816927 1094884 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 15:12:33.816973 1094884 kubeadm.go:318] OS: Linux
	I1026 15:12:33.817035 1094884 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 15:12:33.817132 1094884 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 15:12:33.817221 1094884 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 15:12:33.817300 1094884 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 15:12:33.817392 1094884 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 15:12:33.817469 1094884 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 15:12:33.817529 1094884 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 15:12:33.817610 1094884 kubeadm.go:318] CGROUPS_IO: enabled
	I1026 15:12:33.878062 1094884 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 15:12:33.878236 1094884 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 15:12:33.878364 1094884 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 15:12:33.887538 1094884 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 15:12:33.889316 1094884 out.go:252]   - Generating certificates and keys ...
	I1026 15:12:33.889393 1094884 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:12:33.889456 1094884 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:12:34.292495 1094884 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:12:34.449436 1094884 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:12:34.657020 1094884 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:12:35.300215 1094884 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:12:35.661499 1094884 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:12:35.661692 1094884 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-535130 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 15:12:35.807387 1094884 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:12:35.807513 1094884 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-535130 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 15:12:35.865776 1094884 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:12:36.035254 1094884 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:12:36.141587 1094884 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:12:36.141681 1094884 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:12:36.336316 1094884 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:12:36.502661 1094884 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 15:12:37.100733 1094884 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:12:37.150513 1094884 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:12:37.345845 1094884 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:12:37.346412 1094884 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:12:37.350599 1094884 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 15:12:37.353330 1094884 out.go:252]   - Booting up control plane ...
	I1026 15:12:37.353462 1094884 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:12:37.353580 1094884 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:12:37.353685 1094884 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:12:37.367641 1094884 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:12:37.367803 1094884 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:12:37.375592 1094884 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:12:37.375779 1094884 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:12:37.375850 1094884 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:12:37.490942 1094884 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:12:37.491126 1094884 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
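
The kubelet-check phase above polls the kubelet's local healthz endpoint until it answers. An equivalent manual probe against the same port, useful when that wait times out:

    # kubeadm waits on this endpoint for up to 4m0s; poll it by hand:
    until curl -sf http://127.0.0.1:10248/healthz >/dev/null; do
      sleep 1
    done
    echo "kubelet is healthy"
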
	
	
	==> CRI-O <==
	Oct 26 15:12:06 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:06.063199215Z" level=info msg="Created container 0c24c2a5f615fda4210dcf32cae74fec2545fc2e38658db2f8992a93a3393c3a: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bpdjl/kubernetes-dashboard" id=99e9b325-8227-446f-a252-cb87389dd090 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:12:06 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:06.063865648Z" level=info msg="Starting container: 0c24c2a5f615fda4210dcf32cae74fec2545fc2e38658db2f8992a93a3393c3a" id=9cabf219-0463-4ee0-84b3-20d9456eeb56 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:12:06 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:06.065928639Z" level=info msg="Started container" PID=1718 containerID=0c24c2a5f615fda4210dcf32cae74fec2545fc2e38658db2f8992a93a3393c3a description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bpdjl/kubernetes-dashboard id=9cabf219-0463-4ee0-84b3-20d9456eeb56 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6765c4ab322963bdcec7a093179bbeeb06478b5bc63a4ff4f37b5cca40f0a073
	Oct 26 15:12:18 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:18.909281876Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a791a33c-68e1-4a00-b81e-6bd8deea0a01 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:12:18 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:18.910312241Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bcba90a4-788e-4576-a642-c335b7468756 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:12:18 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:18.911386375Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=bf06df24-460f-474c-a749-3680f218f849 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:12:18 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:18.911537608Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:18 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:18.916613338Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:18 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:18.916822353Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b305f456c4a6f57fd692bfe6437ccb5f23c47e9adc146b7d038be455f9711236/merged/etc/passwd: no such file or directory"
	Oct 26 15:12:18 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:18.916864142Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b305f456c4a6f57fd692bfe6437ccb5f23c47e9adc146b7d038be455f9711236/merged/etc/group: no such file or directory"
	Oct 26 15:12:18 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:18.917155837Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:18 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:18.945258918Z" level=info msg="Created container 72d2bf4d876877af13ced9989fac81433cfe9707f6cc1c40255eff4437e7cb7a: kube-system/storage-provisioner/storage-provisioner" id=bf06df24-460f-474c-a749-3680f218f849 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:12:18 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:18.945881314Z" level=info msg="Starting container: 72d2bf4d876877af13ced9989fac81433cfe9707f6cc1c40255eff4437e7cb7a" id=d3035a25-90ed-4e88-b688-f03b56d3f742 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:12:18 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:18.948025841Z" level=info msg="Started container" PID=1743 containerID=72d2bf4d876877af13ced9989fac81433cfe9707f6cc1c40255eff4437e7cb7a description=kube-system/storage-provisioner/storage-provisioner id=d3035a25-90ed-4e88-b688-f03b56d3f742 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9c99c15e32a41e5744ea1c95f57acac94ef55400972ac88f73f76fc9c6f91487
	Oct 26 15:12:21 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:21.797656237Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4592450a-2e67-456e-8e86-4e2b60631252 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:12:21 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:21.798649291Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=599dfcb4-3f73-4c55-b936-978bbbfcc6ab name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:12:21 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:21.799804971Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6g4cz/dashboard-metrics-scraper" id=6b7e20c0-5241-4146-bf84-13d820bfafbe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:12:21 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:21.799935158Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:21 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:21.807576027Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:21 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:21.808128093Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:21 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:21.83252353Z" level=info msg="Created container 8a771a5866228d024c56d769dc7c0deb97ef861cd37d504f2e0ead44a3d579b8: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6g4cz/dashboard-metrics-scraper" id=6b7e20c0-5241-4146-bf84-13d820bfafbe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:12:21 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:21.833251908Z" level=info msg="Starting container: 8a771a5866228d024c56d769dc7c0deb97ef861cd37d504f2e0ead44a3d579b8" id=6aa4350f-bc21-4427-b0f4-18674c26cfbe name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:12:21 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:21.835547879Z" level=info msg="Started container" PID=1759 containerID=8a771a5866228d024c56d769dc7c0deb97ef861cd37d504f2e0ead44a3d579b8 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6g4cz/dashboard-metrics-scraper id=6aa4350f-bc21-4427-b0f4-18674c26cfbe name=/runtime.v1.RuntimeService/StartContainer sandboxID=1fbd4949787fb995ad3d2e337f7f197305cee0131daf9f37eae4fb808033d11f
	Oct 26 15:12:21 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:21.920419101Z" level=info msg="Removing container: 455107a11f6b9d2caea8fae3d54bce3cfc713edc69e8d2655d0e4f9a5ecc54f9" id=333e6ed7-c380-4303-9f02-a41da5b27a67 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:12:21 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:21.931120505Z" level=info msg="Removed container 455107a11f6b9d2caea8fae3d54bce3cfc713edc69e8d2655d0e4f9a5ecc54f9: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6g4cz/dashboard-metrics-scraper" id=333e6ed7-c380-4303-9f02-a41da5b27a67 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	8a771a5866228       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   1fbd4949787fb       dashboard-metrics-scraper-5f989dc9cf-6g4cz       kubernetes-dashboard
	72d2bf4d87687       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   9c99c15e32a41       storage-provisioner                              kube-system
	0c24c2a5f615f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   34 seconds ago      Running             kubernetes-dashboard        0                   6765c4ab32296       kubernetes-dashboard-8694d4445c-bpdjl            kubernetes-dashboard
	b9b6726cc13f8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           52 seconds ago      Running             coredns                     0                   ac3bb9b5e3857       coredns-5dd5756b68-hzjqn                         kube-system
	8c10864b97511       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   e37d7ef1b023b       busybox                                          default
	bcba52fd1283c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   18f01d150c8a7       kindnet-b8hhx                                    kube-system
	9d7f5b66a3f13       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   9c99c15e32a41       storage-provisioner                              kube-system
	8d54d1c865642       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           52 seconds ago      Running             kube-proxy                  0                   ddc39a21c0216       kube-proxy-829lp                                 kube-system
	57862b704429a       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           55 seconds ago      Running             kube-scheduler              0                   d550a78920fe8       kube-scheduler-old-k8s-version-330914            kube-system
	e7c9e2373d25d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           55 seconds ago      Running             etcd                        0                   d7cb33b10ff0f       etcd-old-k8s-version-330914                      kube-system
	14610085016db       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           55 seconds ago      Running             kube-controller-manager     0                   49cde6bc7729b       kube-controller-manager-old-k8s-version-330914   kube-system
	ebe6998e952fa       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           55 seconds ago      Running             kube-apiserver              0                   c694fbb72e963       kube-apiserver-old-k8s-version-330914            kube-system
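
The two Exited rows above (dashboard-metrics-scraper attempt 2 and the old storage-provisioner) are the ones worth inspecting; crictl can pull their logs by container ID, and an ID prefix is enough. A sketch using the truncated ID from the table:

    # Logs from the crash-looping scraper (ID prefix from the table above):
    sudo crictl logs 8a771a5866228
    # Re-derive the list of exited containers, if needed:
    sudo crictl ps -a --state exited
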
	
	
	==> coredns [b9b6726cc13f8a84b43e30b07c19acad2e63b4378a8bf17b7d9363d787298f47] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:35576 - 37043 "HINFO IN 3461098357155546764.732809821893994727. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.076516391s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
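
The "Still waiting on: kubernetes" lines come from CoreDNS's ready plugin, which holds readiness until the kubernetes plugin has synced with the API server. By default CoreDNS serves that signal on port 8181 (health on 8080); a probe sketch, with a hypothetical pod IP:

    # Find the CoreDNS pod IP (k8s-app=kube-dns is the stock selector):
    kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
    # Probe the ready plugin directly from a node (IP is hypothetical):
    curl -s http://10.244.0.5:8181/ready && echo ready
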
	
	
	==> describe nodes <==
	Name:               old-k8s-version-330914
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-330914
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=old-k8s-version-330914
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_10_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:10:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-330914
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:12:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:12:18 +0000   Sun, 26 Oct 2025 15:10:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:12:18 +0000   Sun, 26 Oct 2025 15:10:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:12:18 +0000   Sun, 26 Oct 2025 15:10:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:12:18 +0000   Sun, 26 Oct 2025 15:11:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-330914
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                7b3315c3-b9ce-4fbb-a096-582c49bc7b55
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-hzjqn                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-old-k8s-version-330914                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-b8hhx                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-old-k8s-version-330914             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-330914    200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-829lp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-old-k8s-version-330914             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-6g4cz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-bpdjl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  Starting                 2m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-330914 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-330914 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-330914 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node old-k8s-version-330914 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node old-k8s-version-330914 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node old-k8s-version-330914 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s                 node-controller  Node old-k8s-version-330914 event: Registered Node old-k8s-version-330914 in Controller
	  Normal  NodeReady                91s                  kubelet          Node old-k8s-version-330914 status is now: NodeReady
	  Normal  Starting                 56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x9 over 56s)    kubelet          Node old-k8s-version-330914 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)    kubelet          Node old-k8s-version-330914 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x7 over 56s)    kubelet          Node old-k8s-version-330914 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                  node-controller  Node old-k8s-version-330914 event: Registered Node old-k8s-version-330914 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	
	
	==> etcd [e7c9e2373d25df292a06c5e68b12ca31b0890e6f5f98c7704a6a20c7acce02f7] <==
	{"level":"info","ts":"2025-10-26T15:11:45.352417Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-26T15:11:45.352452Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-26T15:11:45.352529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-26T15:11:45.352633Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-26T15:11:45.352819Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T15:11:45.352904Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T15:11:45.355485Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-26T15:11:45.355653Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-26T15:11:45.35569Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-26T15:11:45.355849Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-26T15:11:45.355912Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-26T15:11:46.343993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-26T15:11:46.344036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-26T15:11:46.344077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-26T15:11:46.344096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-26T15:11:46.344104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-26T15:11:46.344114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-26T15:11:46.344121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-26T15:11:46.34566Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-330914 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-26T15:11:46.345664Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T15:11:46.345719Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T15:11:46.345896Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-26T15:11:46.345921Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-26T15:11:46.34698Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-26T15:11:46.346975Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 15:12:40 up  2:55,  0 user,  load average: 2.46, 2.42, 1.67
	Linux old-k8s-version-330914 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bcba52fd1283c6a8528b225e7149f8ad6f13d72ccdf6c221344f3d60fb7c2912] <==
	I1026 15:11:48.353247       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:11:48.353543       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 15:11:48.353742       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:11:48.353766       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:11:48.353781       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:11:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:11:48.560249       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:11:48.560301       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:11:48.560319       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:11:48.560659       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 15:11:48.961122       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:11:48.961154       1 metrics.go:72] Registering metrics
	I1026 15:11:48.961250       1 controller.go:711] "Syncing nftables rules"
	I1026 15:11:58.561238       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:11:58.561350       1 main.go:301] handling current node
	I1026 15:12:08.560364       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:12:08.560409       1 main.go:301] handling current node
	I1026 15:12:18.560330       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:12:18.560370       1 main.go:301] handling current node
	I1026 15:12:28.564808       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:12:28.564850       1 main.go:301] handling current node
	I1026 15:12:38.566321       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:12:38.566372       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ebe6998e952fa61da87a8c37ca602b0f2ebdf5f7cf4025c9fd2507b770af8504] <==
	I1026 15:11:47.411032       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1026 15:11:47.411056       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1026 15:11:47.411219       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1026 15:11:47.411503       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1026 15:11:47.411699       1 shared_informer.go:318] Caches are synced for configmaps
	I1026 15:11:47.412208       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 15:11:47.413739       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1026 15:11:47.413775       1 aggregator.go:166] initial CRD sync complete...
	I1026 15:11:47.413783       1 autoregister_controller.go:141] Starting autoregister controller
	I1026 15:11:47.413796       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:11:47.413803       1 cache.go:39] Caches are synced for autoregister controller
	E1026 15:11:47.417240       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 15:11:47.467679       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:11:47.468463       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1026 15:11:48.302152       1 controller.go:624] quota admission added evaluator for: namespaces
	I1026 15:11:48.314617       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:11:48.337863       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1026 15:11:48.358698       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:11:48.366805       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:11:48.376228       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1026 15:11:48.421950       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.179.162"}
	I1026 15:11:48.437752       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.227.95"}
	I1026 15:11:59.611159       1 controller.go:624] quota admission added evaluator for: endpoints
	I1026 15:11:59.655218       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:11:59.739281       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [14610085016dbaf8341ce666f39a20518090a5e59a40d14c2f08730cc477f696] <==
	I1026 15:11:59.783838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="127.641µs"
	I1026 15:11:59.785901       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="59.652µs"
	I1026 15:11:59.789932       1 shared_informer.go:318] Caches are synced for cronjob
	I1026 15:11:59.795983       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.915µs"
	I1026 15:11:59.806062       1 shared_informer.go:318] Caches are synced for taint
	I1026 15:11:59.806180       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1026 15:11:59.806310       1 taint_manager.go:211] "Sending events to api server"
	I1026 15:11:59.806374       1 event.go:307] "Event occurred" object="old-k8s-version-330914" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-330914 event: Registered Node old-k8s-version-330914 in Controller"
	I1026 15:11:59.806205       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1026 15:11:59.806621       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-330914"
	I1026 15:11:59.806735       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1026 15:11:59.830210       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 15:11:59.853576       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 15:12:00.173861       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 15:12:00.188333       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 15:12:00.188370       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1026 15:12:02.874883       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="142.873µs"
	I1026 15:12:03.881403       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="133.281µs"
	I1026 15:12:04.884706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.564µs"
	I1026 15:12:06.895443       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.68245ms"
	I1026 15:12:06.895552       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.081µs"
	I1026 15:12:21.931349       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.386µs"
	I1026 15:12:23.256108       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.957355ms"
	I1026 15:12:23.256287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="125.122µs"
	I1026 15:12:30.079943       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="109.734µs"
	
	
	==> kube-proxy [8d54d1c865642a190dabbe2a4e3938bf3b3c9343a8c8d4d402b72a694a82f3bc] <==
	I1026 15:11:48.207584       1 server_others.go:69] "Using iptables proxy"
	I1026 15:11:48.217361       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1026 15:11:48.238276       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:11:48.240637       1 server_others.go:152] "Using iptables Proxier"
	I1026 15:11:48.240673       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1026 15:11:48.240685       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1026 15:11:48.240720       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1026 15:11:48.241037       1 server.go:846] "Version info" version="v1.28.0"
	I1026 15:11:48.241103       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:11:48.241863       1 config.go:97] "Starting endpoint slice config controller"
	I1026 15:11:48.242565       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1026 15:11:48.242751       1 config.go:315] "Starting node config controller"
	I1026 15:11:48.242761       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1026 15:11:48.243200       1 config.go:188] "Starting service config controller"
	I1026 15:11:48.243360       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1026 15:11:48.343444       1 shared_informer.go:318] Caches are synced for node config
	I1026 15:11:48.343455       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1026 15:11:48.343605       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [57862b704429a1e7b57796a2620311a2e27ce616153a415ac4d41876a1582708] <==
	I1026 15:11:45.694266       1 serving.go:348] Generated self-signed cert in-memory
	W1026 15:11:47.350433       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 15:11:47.350470       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 15:11:47.350484       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 15:11:47.350495       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 15:11:47.370305       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1026 15:11:47.370342       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:11:47.372035       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:11:47.372138       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 15:11:47.373138       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1026 15:11:47.373506       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1026 15:11:47.473252       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 26 15:11:59 old-k8s-version-330914 kubelet[724]: I1026 15:11:59.767518     724 topology_manager.go:215] "Topology Admit Handler" podUID="cb51d6f6-61ac-4b04-875f-2daec24a4210" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-6g4cz"
	Oct 26 15:11:59 old-k8s-version-330914 kubelet[724]: I1026 15:11:59.945074     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cb51d6f6-61ac-4b04-875f-2daec24a4210-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-6g4cz\" (UID: \"cb51d6f6-61ac-4b04-875f-2daec24a4210\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6g4cz"
	Oct 26 15:11:59 old-k8s-version-330914 kubelet[724]: I1026 15:11:59.945143     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts7jv\" (UniqueName: \"kubernetes.io/projected/662c14a7-1a94-4d0c-b7e0-9c2d8eef8724-kube-api-access-ts7jv\") pod \"kubernetes-dashboard-8694d4445c-bpdjl\" (UID: \"662c14a7-1a94-4d0c-b7e0-9c2d8eef8724\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bpdjl"
	Oct 26 15:11:59 old-k8s-version-330914 kubelet[724]: I1026 15:11:59.945196     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/662c14a7-1a94-4d0c-b7e0-9c2d8eef8724-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-bpdjl\" (UID: \"662c14a7-1a94-4d0c-b7e0-9c2d8eef8724\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bpdjl"
	Oct 26 15:11:59 old-k8s-version-330914 kubelet[724]: I1026 15:11:59.945294     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf58s\" (UniqueName: \"kubernetes.io/projected/cb51d6f6-61ac-4b04-875f-2daec24a4210-kube-api-access-xf58s\") pod \"dashboard-metrics-scraper-5f989dc9cf-6g4cz\" (UID: \"cb51d6f6-61ac-4b04-875f-2daec24a4210\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6g4cz"
	Oct 26 15:12:02 old-k8s-version-330914 kubelet[724]: I1026 15:12:02.863295     724 scope.go:117] "RemoveContainer" containerID="5c3ee1b7015d7e29d7597e1c6398773f27630790401ed668d1ae2541726835bb"
	Oct 26 15:12:03 old-k8s-version-330914 kubelet[724]: I1026 15:12:03.867521     724 scope.go:117] "RemoveContainer" containerID="5c3ee1b7015d7e29d7597e1c6398773f27630790401ed668d1ae2541726835bb"
	Oct 26 15:12:03 old-k8s-version-330914 kubelet[724]: I1026 15:12:03.867855     724 scope.go:117] "RemoveContainer" containerID="455107a11f6b9d2caea8fae3d54bce3cfc713edc69e8d2655d0e4f9a5ecc54f9"
	Oct 26 15:12:03 old-k8s-version-330914 kubelet[724]: E1026 15:12:03.868217     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6g4cz_kubernetes-dashboard(cb51d6f6-61ac-4b04-875f-2daec24a4210)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6g4cz" podUID="cb51d6f6-61ac-4b04-875f-2daec24a4210"
	Oct 26 15:12:04 old-k8s-version-330914 kubelet[724]: I1026 15:12:04.871743     724 scope.go:117] "RemoveContainer" containerID="455107a11f6b9d2caea8fae3d54bce3cfc713edc69e8d2655d0e4f9a5ecc54f9"
	Oct 26 15:12:04 old-k8s-version-330914 kubelet[724]: E1026 15:12:04.872110     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6g4cz_kubernetes-dashboard(cb51d6f6-61ac-4b04-875f-2daec24a4210)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6g4cz" podUID="cb51d6f6-61ac-4b04-875f-2daec24a4210"
	Oct 26 15:12:06 old-k8s-version-330914 kubelet[724]: I1026 15:12:06.889424     724 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bpdjl" podStartSLOduration=1.962032566 podCreationTimestamp="2025-10-26 15:11:59 +0000 UTC" firstStartedPulling="2025-10-26 15:12:00.094157732 +0000 UTC m=+15.391600501" lastFinishedPulling="2025-10-26 15:12:06.021485804 +0000 UTC m=+21.318928560" observedRunningTime="2025-10-26 15:12:06.889133962 +0000 UTC m=+22.186576750" watchObservedRunningTime="2025-10-26 15:12:06.889360625 +0000 UTC m=+22.186803399"
	Oct 26 15:12:10 old-k8s-version-330914 kubelet[724]: I1026 15:12:10.068707     724 scope.go:117] "RemoveContainer" containerID="455107a11f6b9d2caea8fae3d54bce3cfc713edc69e8d2655d0e4f9a5ecc54f9"
	Oct 26 15:12:10 old-k8s-version-330914 kubelet[724]: E1026 15:12:10.069127     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6g4cz_kubernetes-dashboard(cb51d6f6-61ac-4b04-875f-2daec24a4210)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6g4cz" podUID="cb51d6f6-61ac-4b04-875f-2daec24a4210"
	Oct 26 15:12:18 old-k8s-version-330914 kubelet[724]: I1026 15:12:18.908679     724 scope.go:117] "RemoveContainer" containerID="9d7f5b66a3f13ea53acbb40e7d705efc2a46e95c15e0215793c795a76ecbaef1"
	Oct 26 15:12:21 old-k8s-version-330914 kubelet[724]: I1026 15:12:21.796932     724 scope.go:117] "RemoveContainer" containerID="455107a11f6b9d2caea8fae3d54bce3cfc713edc69e8d2655d0e4f9a5ecc54f9"
	Oct 26 15:12:21 old-k8s-version-330914 kubelet[724]: I1026 15:12:21.919093     724 scope.go:117] "RemoveContainer" containerID="455107a11f6b9d2caea8fae3d54bce3cfc713edc69e8d2655d0e4f9a5ecc54f9"
	Oct 26 15:12:21 old-k8s-version-330914 kubelet[724]: I1026 15:12:21.919366     724 scope.go:117] "RemoveContainer" containerID="8a771a5866228d024c56d769dc7c0deb97ef861cd37d504f2e0ead44a3d579b8"
	Oct 26 15:12:21 old-k8s-version-330914 kubelet[724]: E1026 15:12:21.919747     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6g4cz_kubernetes-dashboard(cb51d6f6-61ac-4b04-875f-2daec24a4210)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6g4cz" podUID="cb51d6f6-61ac-4b04-875f-2daec24a4210"
	Oct 26 15:12:30 old-k8s-version-330914 kubelet[724]: I1026 15:12:30.069092     724 scope.go:117] "RemoveContainer" containerID="8a771a5866228d024c56d769dc7c0deb97ef861cd37d504f2e0ead44a3d579b8"
	Oct 26 15:12:30 old-k8s-version-330914 kubelet[724]: E1026 15:12:30.069540     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6g4cz_kubernetes-dashboard(cb51d6f6-61ac-4b04-875f-2daec24a4210)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6g4cz" podUID="cb51d6f6-61ac-4b04-875f-2daec24a4210"
	Oct 26 15:12:37 old-k8s-version-330914 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:12:37 old-k8s-version-330914 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:12:37 old-k8s-version-330914 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 26 15:12:37 old-k8s-version-330914 systemd[1]: kubelet.service: Consumed 1.592s CPU time.
	
	
	==> kubernetes-dashboard [0c24c2a5f615fda4210dcf32cae74fec2545fc2e38658db2f8992a93a3393c3a] <==
	2025/10/26 15:12:06 Using namespace: kubernetes-dashboard
	2025/10/26 15:12:06 Using in-cluster config to connect to apiserver
	2025/10/26 15:12:06 Using secret token for csrf signing
	2025/10/26 15:12:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 15:12:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 15:12:06 Successful initial request to the apiserver, version: v1.28.0
	2025/10/26 15:12:06 Generating JWE encryption key
	2025/10/26 15:12:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 15:12:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 15:12:06 Initializing JWE encryption key from synchronized object
	2025/10/26 15:12:06 Creating in-cluster Sidecar client
	2025/10/26 15:12:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:12:06 Serving insecurely on HTTP port: 9090
	2025/10/26 15:12:06 Starting overwatch
	2025/10/26 15:12:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [72d2bf4d876877af13ced9989fac81433cfe9707f6cc1c40255eff4437e7cb7a] <==
	I1026 15:12:18.962334       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 15:12:18.972544       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 15:12:18.972596       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 15:12:36.368801       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:12:36.369018       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-330914_eefb52e5-4023-4ca4-a96b-3f3172d039c2!
	I1026 15:12:36.368930       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6451c6cc-4615-4622-b59c-d1296145dee3", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-330914_eefb52e5-4023-4ca4-a96b-3f3172d039c2 became leader
	I1026 15:12:36.469394       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-330914_eefb52e5-4023-4ca4-a96b-3f3172d039c2!
	
	
	==> storage-provisioner [9d7f5b66a3f13ea53acbb40e7d705efc2a46e95c15e0215793c795a76ecbaef1] <==
	I1026 15:11:48.179130       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 15:12:18.181924       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
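The two storage-provisioner logs above show the leader-election handoff behind the restart: the failed instance (9d7f5b66) started at 15:11:48 and died at 15:12:18 when its apiserver probe timed out, while the replacement (72d2bf4d) blocked on the kube-system/k8s.io-minikube-hostpath lock from 15:12:18 until the stale lease expired at 15:12:36, and only then started its controller. For readers unfamiliar with the pattern, a minimal client-go sketch of the same flow follows. It is illustrative only: the provisioner's event shows a legacy Endpoints-based lock, whereas this sketch uses the newer Lease lock, and the identity and callbacks are placeholders, not minikube's actual code.

	// Sketch only: client-go leader election mirroring the handoff above.
	// The log's event shows an Endpoints-based lock; this uses the newer
	// Lease lock, and the identity/callbacks are placeholders.
	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()

		// Same lock name and namespace as in the provisioner log.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // a dead holder is superseded after this long
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease, starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost lease, shutting down")
				},
			},
		})
	}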
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-330914 -n old-k8s-version-330914
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-330914 -n old-k8s-version-330914: exit status 2 (347.27301ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-330914 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
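The post-mortem closes by asking the apiserver for every pod whose phase is not Running, across all namespaces; an empty result means nothing was left pending or crash-looping at the Kubernetes level even though the pause operation failed. For reference, the client-go equivalent of that kubectl query is sketched below, under the assumption that the current kubeconfig context already points at the cluster under test (the kubectl command above pins --context explicitly).

	// Sketch only: the client-go equivalent of
	//   kubectl get po -A --field-selector=status.phase!=Running
	// Assumes the current kubeconfig context points at the cluster under test.
	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// An empty namespace lists across all namespaces, like kubectl -A.
		pods, err := client.CoreV1().Pods("").List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name, p.Status.Phase)
		}
	}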
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-330914
helpers_test.go:243: (dbg) docker inspect old-k8s-version-330914:

-- stdout --
	[
	    {
	        "Id": "91ae2e5aad345c2e0703f327fd036502476cd376cb2a6c583db438ed9b0ac0fe",
	        "Created": "2025-10-26T15:10:26.438664017Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1086807,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:11:38.127982725Z",
	            "FinishedAt": "2025-10-26T15:11:37.229715977Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/91ae2e5aad345c2e0703f327fd036502476cd376cb2a6c583db438ed9b0ac0fe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91ae2e5aad345c2e0703f327fd036502476cd376cb2a6c583db438ed9b0ac0fe/hostname",
	        "HostsPath": "/var/lib/docker/containers/91ae2e5aad345c2e0703f327fd036502476cd376cb2a6c583db438ed9b0ac0fe/hosts",
	        "LogPath": "/var/lib/docker/containers/91ae2e5aad345c2e0703f327fd036502476cd376cb2a6c583db438ed9b0ac0fe/91ae2e5aad345c2e0703f327fd036502476cd376cb2a6c583db438ed9b0ac0fe-json.log",
	        "Name": "/old-k8s-version-330914",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-330914:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-330914",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "91ae2e5aad345c2e0703f327fd036502476cd376cb2a6c583db438ed9b0ac0fe",
	                "LowerDir": "/var/lib/docker/overlay2/1ed9fa6c3b37e53a285735adb39a4961c8ca3dc94f31480b0cfd0d1b96fc7a86-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1ed9fa6c3b37e53a285735adb39a4961c8ca3dc94f31480b0cfd0d1b96fc7a86/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1ed9fa6c3b37e53a285735adb39a4961c8ca3dc94f31480b0cfd0d1b96fc7a86/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1ed9fa6c3b37e53a285735adb39a4961c8ca3dc94f31480b0cfd0d1b96fc7a86/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-330914",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-330914/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-330914",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-330914",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-330914",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bf325b359f01d99f861ac20000363893f8802fb28f33bafd4d0f7af6c69650a4",
	            "SandboxKey": "/var/run/docker/netns/bf325b359f01",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33832"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33833"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33836"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33834"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33835"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-330914": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:3d:6e:c3:3e:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "56ce3fb526f5012c2231b9293c9ece449bc551903b4972b11997763e4592ce3f",
	                    "EndpointID": "415724bd1a1f64b6c859cc16e71f69fd10cc9d856e62d55af7b9efbbf1ee7731",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-330914",
	                        "91ae2e5aad34"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
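The inspect output above is how the harness recovers the ephemeral host ports minikube publishes: HostConfig.PortBindings requests 127.0.0.1 with an empty HostPort, and the runtime-assigned values appear under NetworkSettings.Ports (here 8443/tcp, the apiserver port, maps to 127.0.0.1:33835). A minimal sketch of pulling that mapping out of `docker inspect` JSON with only the Go standard library, the struct trimmed to the fields actually read:

	// Sketch only: recover the runtime-assigned host port for 8443/tcp from
	// `docker inspect` JSON using just the standard library. The struct is
	// trimmed to the fields read; names match the output shown above.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "old-k8s-version-330914").Output()
		if err != nil {
			log.Fatal(err)
		}
		var containers []inspect // docker inspect always emits a JSON array
		if err := json.Unmarshal(out, &containers); err != nil {
			log.Fatal(err)
		}
		for _, b := range containers[0].NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("%s:%s\n", b.HostIp, b.HostPort) // e.g. 127.0.0.1:33835
		}
	}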
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330914 -n old-k8s-version-330914
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330914 -n old-k8s-version-330914: exit status 2 (356.61435ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
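Both status probes here pass a Go text/template through --format ({{.APIServer}} earlier, {{.Host}} just above), which minikube renders against its internal status struct; "Running" on stdout alongside a nonzero exit status means the host is up while the exit code encodes a degraded component state (hence the harness's "may be ok"). The rendering mechanism itself is plain text/template, illustrated below with a stand-in struct, not minikube's real type:

	// Sketch only: how a --format Go template is rendered. The Status struct
	// is a stand-in for illustration, not minikube's real type.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", APIServer: "Running"}
		// Equivalent of `minikube status --format={{.Host}}`.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}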
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-330914 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-330914 logs -n 25: (1.26083008s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p force-systemd-flag-391593                                                                                                                                                                                                                  │ force-systemd-flag-391593 │ jenkins │ v1.37.0 │ 26 Oct 25 15:09 UTC │ 26 Oct 25 15:10 UTC │
	│ start   │ -p cert-options-124833 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-124833       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ stop    │ -p NoKubernetes-917490                                                                                                                                                                                                                        │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ start   │ -p NoKubernetes-917490 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ ssh     │ -p NoKubernetes-917490 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │                     │
	│ delete  │ -p NoKubernetes-917490                                                                                                                                                                                                                        │ NoKubernetes-917490       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ start   │ -p old-k8s-version-330914 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-330914    │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:11 UTC │
	│ ssh     │ cert-options-124833 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-124833       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ ssh     │ -p cert-options-124833 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-124833       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ delete  │ -p cert-options-124833                                                                                                                                                                                                                        │ cert-options-124833       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ start   │ -p no-preload-475081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-475081         │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:11 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-330914 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-330914    │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ stop    │ -p old-k8s-version-330914 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-330914    │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ addons  │ enable metrics-server -p no-preload-475081 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-475081         │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ stop    │ -p no-preload-475081 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-475081         │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-330914 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-330914    │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p old-k8s-version-330914 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-330914    │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:12 UTC │
	│ addons  │ enable dashboard -p no-preload-475081 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-475081         │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p no-preload-475081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-475081         │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p kubernetes-upgrade-176599 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-176599 │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	│ start   │ -p kubernetes-upgrade-176599 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-176599 │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ delete  │ -p kubernetes-upgrade-176599                                                                                                                                                                                                                  │ kubernetes-upgrade-176599 │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p embed-certs-535130 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-535130        │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	│ image   │ old-k8s-version-330914 image list --format=json                                                                                                                                                                                               │ old-k8s-version-330914    │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ pause   │ -p old-k8s-version-330914 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-330914    │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:12:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:12:22.723695 1094884 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:12:22.723977 1094884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:12:22.723989 1094884 out.go:374] Setting ErrFile to fd 2...
	I1026 15:12:22.723995 1094884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:12:22.724291 1094884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:12:22.724794 1094884 out.go:368] Setting JSON to false
	I1026 15:12:22.726080 1094884 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10491,"bootTime":1761481052,"procs":413,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:12:22.726194 1094884 start.go:141] virtualization: kvm guest
	I1026 15:12:22.728318 1094884 out.go:179] * [embed-certs-535130] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:12:22.729604 1094884 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:12:22.729606 1094884 notify.go:220] Checking for updates...
	I1026 15:12:22.732660 1094884 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:12:22.734078 1094884 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:12:22.735315 1094884 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 15:12:22.736302 1094884 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:12:22.737366 1094884 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:12:22.738837 1094884 config.go:182] Loaded profile config "cert-expiration-619245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:12:22.738935 1094884 config.go:182] Loaded profile config "no-preload-475081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:12:22.739013 1094884 config.go:182] Loaded profile config "old-k8s-version-330914": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 15:12:22.739113 1094884 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:12:22.764422 1094884 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 15:12:22.764534 1094884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:12:22.829223 1094884 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-26 15:12:22.816741758 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:12:22.829376 1094884 docker.go:318] overlay module found
	I1026 15:12:22.832034 1094884 out.go:179] * Using the docker driver based on user configuration
	W1026 15:12:18.001061 1086607 pod_ready.go:104] pod "coredns-5dd5756b68-hzjqn" is not "Ready", error: <nil>
	W1026 15:12:20.003024 1086607 pod_ready.go:104] pod "coredns-5dd5756b68-hzjqn" is not "Ready", error: <nil>
	W1026 15:12:22.003141 1086607 pod_ready.go:104] pod "coredns-5dd5756b68-hzjqn" is not "Ready", error: <nil>
	I1026 15:12:22.833219 1094884 start.go:305] selected driver: docker
	I1026 15:12:22.833236 1094884 start.go:925] validating driver "docker" against <nil>
	I1026 15:12:22.833255 1094884 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:12:22.833817 1094884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:12:22.893827 1094884 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-26 15:12:22.883069758 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:12:22.894093 1094884 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:12:22.894326 1094884 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:12:22.895696 1094884 out.go:179] * Using Docker driver with root privileges
	I1026 15:12:22.896861 1094884 cni.go:84] Creating CNI manager for ""
	I1026 15:12:22.896952 1094884 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:12:22.896969 1094884 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 15:12:22.897079 1094884 start.go:349] cluster config:
	{Name:embed-certs-535130 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:12:22.898546 1094884 out.go:179] * Starting "embed-certs-535130" primary control-plane node in "embed-certs-535130" cluster
	I1026 15:12:22.899674 1094884 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:12:22.900838 1094884 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:12:22.901910 1094884 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:12:22.901967 1094884 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 15:12:22.901983 1094884 cache.go:58] Caching tarball of preloaded images
	I1026 15:12:22.902045 1094884 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:12:22.902150 1094884 preload.go:233] Found /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 15:12:22.902201 1094884 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:12:22.902353 1094884 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/config.json ...
	I1026 15:12:22.902381 1094884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/config.json: {Name:mk12a66b75728d08ad27e4045a242e76128ff185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:22.925433 1094884 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:12:22.925455 1094884 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:12:22.925472 1094884 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:12:22.925507 1094884 start.go:360] acquireMachinesLock for embed-certs-535130: {Name:mk2308f6e6d84ecfdd2789c813704db715591895 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:12:22.925609 1094884 start.go:364] duration metric: took 84.211µs to acquireMachinesLock for "embed-certs-535130"
	I1026 15:12:22.925633 1094884 start.go:93] Provisioning new machine with config: &{Name:embed-certs-535130 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:12:22.925700 1094884 start.go:125] createHost starting for "" (driver="docker")
	W1026 15:12:19.071838 1087870 pod_ready.go:104] pod "coredns-66bc5c9577-knr22" is not "Ready", error: <nil>
	W1026 15:12:21.570936 1087870 pod_ready.go:104] pod "coredns-66bc5c9577-knr22" is not "Ready", error: <nil>
	I1026 15:12:23.502675 1086607 pod_ready.go:94] pod "coredns-5dd5756b68-hzjqn" is "Ready"
	I1026 15:12:23.502703 1086607 pod_ready.go:86] duration metric: took 34.507438685s for pod "coredns-5dd5756b68-hzjqn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:23.506504 1086607 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:23.511539 1086607 pod_ready.go:94] pod "etcd-old-k8s-version-330914" is "Ready"
	I1026 15:12:23.511569 1086607 pod_ready.go:86] duration metric: took 5.033388ms for pod "etcd-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:23.515140 1086607 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:23.520139 1086607 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-330914" is "Ready"
	I1026 15:12:23.520198 1086607 pod_ready.go:86] duration metric: took 4.997939ms for pod "kube-apiserver-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:23.523393 1086607 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:23.700379 1086607 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-330914" is "Ready"
	I1026 15:12:23.700409 1086607 pod_ready.go:86] duration metric: took 176.992551ms for pod "kube-controller-manager-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:23.900733 1086607 pod_ready.go:83] waiting for pod "kube-proxy-829lp" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:24.299617 1086607 pod_ready.go:94] pod "kube-proxy-829lp" is "Ready"
	I1026 15:12:24.299649 1086607 pod_ready.go:86] duration metric: took 398.889482ms for pod "kube-proxy-829lp" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:24.500562 1086607 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:24.900567 1086607 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-330914" is "Ready"
	I1026 15:12:24.900600 1086607 pod_ready.go:86] duration metric: took 400.008062ms for pod "kube-scheduler-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:24.900617 1086607 pod_ready.go:40] duration metric: took 35.916930354s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:12:24.950321 1086607 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1026 15:12:24.955073 1086607 out.go:203] 
	W1026 15:12:24.956447 1086607 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1026 15:12:24.957576 1086607 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1026 15:12:24.958913 1086607 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-330914" cluster and "default" namespace by default
	I1026 15:12:22.927779 1094884 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 15:12:22.928008 1094884 start.go:159] libmachine.API.Create for "embed-certs-535130" (driver="docker")
	I1026 15:12:22.928043 1094884 client.go:168] LocalClient.Create starting
	I1026 15:12:22.928138 1094884 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem
	I1026 15:12:22.928224 1094884 main.go:141] libmachine: Decoding PEM data...
	I1026 15:12:22.928244 1094884 main.go:141] libmachine: Parsing certificate...
	I1026 15:12:22.928320 1094884 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem
	I1026 15:12:22.928345 1094884 main.go:141] libmachine: Decoding PEM data...
	I1026 15:12:22.928354 1094884 main.go:141] libmachine: Parsing certificate...
	I1026 15:12:22.928694 1094884 cli_runner.go:164] Run: docker network inspect embed-certs-535130 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 15:12:22.947434 1094884 cli_runner.go:211] docker network inspect embed-certs-535130 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 15:12:22.947544 1094884 network_create.go:284] running [docker network inspect embed-certs-535130] to gather additional debugging logs...
	I1026 15:12:22.947572 1094884 cli_runner.go:164] Run: docker network inspect embed-certs-535130
	W1026 15:12:22.965884 1094884 cli_runner.go:211] docker network inspect embed-certs-535130 returned with exit code 1
	I1026 15:12:22.965918 1094884 network_create.go:287] error running [docker network inspect embed-certs-535130]: docker network inspect embed-certs-535130: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-535130 not found
	I1026 15:12:22.965936 1094884 network_create.go:289] output of [docker network inspect embed-certs-535130]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-535130 not found
	
	** /stderr **
	I1026 15:12:22.966046 1094884 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:12:22.985557 1094884 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fa58be42f477 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:e4:ad:45:54:67} reservation:<nil>}
	I1026 15:12:22.986359 1094884 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-788b1aa150f9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:3d:9b:f7:9b:2d} reservation:<nil>}
	I1026 15:12:22.987196 1094884 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3ea0f8afe5af IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:d6:81:f4:17:77:eb} reservation:<nil>}
	I1026 15:12:22.988126 1094884 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec6510}
	I1026 15:12:22.988153 1094884 network_create.go:124] attempt to create docker network embed-certs-535130 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1026 15:12:22.988258 1094884 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-535130 embed-certs-535130
	I1026 15:12:23.053788 1094884 network_create.go:108] docker network embed-certs-535130 192.168.76.0/24 created
	I1026 15:12:23.053820 1094884 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-535130" container
	I1026 15:12:23.053922 1094884 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 15:12:23.073511 1094884 cli_runner.go:164] Run: docker volume create embed-certs-535130 --label name.minikube.sigs.k8s.io=embed-certs-535130 --label created_by.minikube.sigs.k8s.io=true
	I1026 15:12:23.092193 1094884 oci.go:103] Successfully created a docker volume embed-certs-535130
	I1026 15:12:23.092294 1094884 cli_runner.go:164] Run: docker run --rm --name embed-certs-535130-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-535130 --entrypoint /usr/bin/test -v embed-certs-535130:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 15:12:23.512406 1094884 oci.go:107] Successfully prepared a docker volume embed-certs-535130
	I1026 15:12:23.512440 1094884 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:12:23.512464 1094884 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 15:12:23.512541 1094884 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-535130:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1026 15:12:24.071766 1087870 pod_ready.go:104] pod "coredns-66bc5c9577-knr22" is not "Ready", error: <nil>
	W1026 15:12:26.570742 1087870 pod_ready.go:104] pod "coredns-66bc5c9577-knr22" is not "Ready", error: <nil>
	I1026 15:12:28.044544 1094884 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-535130:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.531951929s)
	I1026 15:12:28.044587 1094884 kic.go:203] duration metric: took 4.532116219s to extract preloaded images to volume ...
	W1026 15:12:28.044702 1094884 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 15:12:28.044786 1094884 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 15:12:28.044853 1094884 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 15:12:28.105477 1094884 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-535130 --name embed-certs-535130 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-535130 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-535130 --network embed-certs-535130 --ip 192.168.76.2 --volume embed-certs-535130:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 15:12:28.395695 1094884 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Running}}
	I1026 15:12:28.416487 1094884 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:12:28.437229 1094884 cli_runner.go:164] Run: docker exec embed-certs-535130 stat /var/lib/dpkg/alternatives/iptables
	I1026 15:12:28.483324 1094884 oci.go:144] the created container "embed-certs-535130" has a running status.
	I1026 15:12:28.483369 1094884 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa...
	I1026 15:12:29.157005 1094884 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 15:12:29.183422 1094884 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:12:29.201144 1094884 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 15:12:29.201180 1094884 kic_runner.go:114] Args: [docker exec --privileged embed-certs-535130 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 15:12:29.249224 1094884 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:12:29.269108 1094884 machine.go:93] provisionDockerMachine start ...
	I1026 15:12:29.269252 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:29.287870 1094884 main.go:141] libmachine: Using SSH client type: native
	I1026 15:12:29.288147 1094884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1026 15:12:29.288181 1094884 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:12:29.432484 1094884 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-535130
	
	I1026 15:12:29.432520 1094884 ubuntu.go:182] provisioning hostname "embed-certs-535130"
	I1026 15:12:29.432600 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:29.451595 1094884 main.go:141] libmachine: Using SSH client type: native
	I1026 15:12:29.451814 1094884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1026 15:12:29.451827 1094884 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-535130 && echo "embed-certs-535130" | sudo tee /etc/hostname
	I1026 15:12:29.605852 1094884 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-535130
	
	I1026 15:12:29.605944 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:29.625782 1094884 main.go:141] libmachine: Using SSH client type: native
	I1026 15:12:29.626088 1094884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1026 15:12:29.626119 1094884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-535130' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-535130/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-535130' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:12:29.770338 1094884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:12:29.770375 1094884 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-841519/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-841519/.minikube}
	I1026 15:12:29.770428 1094884 ubuntu.go:190] setting up certificates
	I1026 15:12:29.770450 1094884 provision.go:84] configureAuth start
	I1026 15:12:29.770518 1094884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-535130
	I1026 15:12:29.789696 1094884 provision.go:143] copyHostCerts
	I1026 15:12:29.789762 1094884 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem, removing ...
	I1026 15:12:29.789773 1094884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem
	I1026 15:12:29.789856 1094884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem (1082 bytes)
	I1026 15:12:29.789987 1094884 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem, removing ...
	I1026 15:12:29.789999 1094884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem
	I1026 15:12:29.790049 1094884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem (1123 bytes)
	I1026 15:12:29.790145 1094884 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem, removing ...
	I1026 15:12:29.790156 1094884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem
	I1026 15:12:29.790206 1094884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem (1675 bytes)
	I1026 15:12:29.790284 1094884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem org=jenkins.embed-certs-535130 san=[127.0.0.1 192.168.76.2 embed-certs-535130 localhost minikube]
	I1026 15:12:30.082527 1094884 provision.go:177] copyRemoteCerts
	I1026 15:12:30.082582 1094884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:12:30.082620 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:30.101581 1094884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:12:30.204007 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:12:30.225022 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:12:30.242962 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 15:12:30.260627 1094884 provision.go:87] duration metric: took 490.157243ms to configureAuth
	I1026 15:12:30.260655 1094884 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:12:30.260857 1094884 config.go:182] Loaded profile config "embed-certs-535130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:12:30.260976 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:30.279328 1094884 main.go:141] libmachine: Using SSH client type: native
	I1026 15:12:30.279545 1094884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1026 15:12:30.279561 1094884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:12:30.540929 1094884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:12:30.540953 1094884 machine.go:96] duration metric: took 1.27182251s to provisionDockerMachine
	I1026 15:12:30.540967 1094884 client.go:171] duration metric: took 7.612915574s to LocalClient.Create
	I1026 15:12:30.540991 1094884 start.go:167] duration metric: took 7.612983362s to libmachine.API.Create "embed-certs-535130"
	I1026 15:12:30.541001 1094884 start.go:293] postStartSetup for "embed-certs-535130" (driver="docker")
	I1026 15:12:30.541015 1094884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:12:30.541083 1094884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:12:30.541145 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:30.560194 1094884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:12:30.666065 1094884 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:12:30.669831 1094884 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:12:30.669865 1094884 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:12:30.669877 1094884 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/addons for local assets ...
	I1026 15:12:30.669933 1094884 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/files for local assets ...
	I1026 15:12:30.670044 1094884 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem -> 8450952.pem in /etc/ssl/certs
	I1026 15:12:30.670157 1094884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:12:30.678218 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:12:30.700030 1094884 start.go:296] duration metric: took 159.014656ms for postStartSetup
	I1026 15:12:30.700424 1094884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-535130
	I1026 15:12:30.720118 1094884 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/config.json ...
	I1026 15:12:30.720413 1094884 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:12:30.720465 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:30.739104 1094884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:12:30.837679 1094884 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:12:30.842561 1094884 start.go:128] duration metric: took 7.916843227s to createHost
	I1026 15:12:30.842593 1094884 start.go:83] releasing machines lock for "embed-certs-535130", held for 7.916973049s
	I1026 15:12:30.842682 1094884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-535130
	I1026 15:12:30.861500 1094884 ssh_runner.go:195] Run: cat /version.json
	I1026 15:12:30.861556 1094884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:12:30.861562 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:30.861619 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:30.880085 1094884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:12:30.880552 1094884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:12:31.043055 1094884 ssh_runner.go:195] Run: systemctl --version
	I1026 15:12:31.050442 1094884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:12:31.091997 1094884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:12:31.097046 1094884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:12:31.097112 1094884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:12:31.124040 1094884 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 15:12:31.124067 1094884 start.go:495] detecting cgroup driver to use...
	I1026 15:12:31.124106 1094884 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 15:12:31.124152 1094884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:12:31.143171 1094884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:12:31.157567 1094884 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:12:31.157636 1094884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:12:31.175501 1094884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:12:31.195107 1094884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:12:31.280916 1094884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:12:31.370324 1094884 docker.go:234] disabling docker service ...
	I1026 15:12:31.370389 1094884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:12:31.391038 1094884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:12:31.405225 1094884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:12:31.494860 1094884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:12:31.581190 1094884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:12:31.595100 1094884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:12:31.610576 1094884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:12:31.610643 1094884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:31.621702 1094884 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 15:12:31.621772 1094884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:31.631933 1094884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:31.641706 1094884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:31.652631 1094884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:12:31.662065 1094884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:31.672261 1094884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:31.687254 1094884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:31.697622 1094884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:12:31.705869 1094884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:12:31.714245 1094884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:12:31.797931 1094884 ssh_runner.go:195] Run: sudo systemctl restart crio
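The run of sed edits above (pause image, cgroup_manager, conmon_cgroup, default_sysctls) rewrites single keys in the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf before the restart. A minimal local sketch of one such rewrite using regexp instead of sed; it operates on a local copy of the file, whereas minikube runs the sed over SSH on the node:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Equivalent of:
	//   sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' 02-crio.conf
	path := "02-crio.conf" // illustrative local copy, not the node path
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := re.ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("cgroup driver set to systemd")
}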
	I1026 15:12:31.907320 1094884 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:12:31.907394 1094884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:12:31.911700 1094884 start.go:563] Will wait 60s for crictl version
	I1026 15:12:31.911755 1094884 ssh_runner.go:195] Run: which crictl
	I1026 15:12:31.916061 1094884 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:12:31.941571 1094884 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:12:31.941644 1094884 ssh_runner.go:195] Run: crio --version
	I1026 15:12:31.971039 1094884 ssh_runner.go:195] Run: crio --version
	I1026 15:12:32.004653 1094884 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1026 15:12:28.572313 1087870 pod_ready.go:104] pod "coredns-66bc5c9577-knr22" is not "Ready", error: <nil>
	I1026 15:12:31.070880 1087870 pod_ready.go:94] pod "coredns-66bc5c9577-knr22" is "Ready"
	I1026 15:12:31.070915 1087870 pod_ready.go:86] duration metric: took 37.006499908s for pod "coredns-66bc5c9577-knr22" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.073905 1087870 pod_ready.go:83] waiting for pod "etcd-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.078107 1087870 pod_ready.go:94] pod "etcd-no-preload-475081" is "Ready"
	I1026 15:12:31.078138 1087870 pod_ready.go:86] duration metric: took 4.207111ms for pod "etcd-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.080180 1087870 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.084392 1087870 pod_ready.go:94] pod "kube-apiserver-no-preload-475081" is "Ready"
	I1026 15:12:31.084426 1087870 pod_ready.go:86] duration metric: took 4.226805ms for pod "kube-apiserver-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.088708 1087870 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.269156 1087870 pod_ready.go:94] pod "kube-controller-manager-no-preload-475081" is "Ready"
	I1026 15:12:31.269206 1087870 pod_ready.go:86] duration metric: took 180.476065ms for pod "kube-controller-manager-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.468623 1087870 pod_ready.go:83] waiting for pod "kube-proxy-smtlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.869293 1087870 pod_ready.go:94] pod "kube-proxy-smtlg" is "Ready"
	I1026 15:12:31.869330 1087870 pod_ready.go:86] duration metric: took 400.674816ms for pod "kube-proxy-smtlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:32.068930 1087870 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:32.468962 1087870 pod_ready.go:94] pod "kube-scheduler-no-preload-475081" is "Ready"
	I1026 15:12:32.468992 1087870 pod_ready.go:86] duration metric: took 400.035699ms for pod "kube-scheduler-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:32.469006 1087870 pod_ready.go:40] duration metric: took 38.40815001s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:12:32.526497 1087870 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:12:32.529763 1087870 out.go:179] * Done! kubectl is now configured to use "no-preload-475081" cluster and "default" namespace by default
	I1026 15:12:32.005880 1094884 cli_runner.go:164] Run: docker network inspect embed-certs-535130 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:12:32.024572 1094884 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 15:12:32.029206 1094884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
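The bash one-liner above is an idempotent hosts-file upsert: drop any existing host.minikube.internal line, append a fresh one, and cp the result back into place. The same logic in Go, run here against a local copy rather than the node's /etc/hosts; upsertHost is a hypothetical helper:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost removes any existing "IP<TAB>host" line and appends a fresh
// one, mirroring the grep -v / echo / cp pipeline in the log.
func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Illustrative local copy; on the node the target is /etc/hosts.
	if err := upsertHost("hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
	fmt.Println("hosts entry refreshed")
}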
	I1026 15:12:32.040870 1094884 kubeadm.go:883] updating cluster {Name:embed-certs-535130 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:12:32.041002 1094884 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:12:32.041061 1094884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:12:32.075869 1094884 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:12:32.075897 1094884 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:12:32.075949 1094884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:12:32.102439 1094884 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:12:32.102468 1094884 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:12:32.102478 1094884 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1026 15:12:32.102571 1094884 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-535130 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:12:32.102633 1094884 ssh_runner.go:195] Run: crio config
	I1026 15:12:32.149754 1094884 cni.go:84] Creating CNI manager for ""
	I1026 15:12:32.149778 1094884 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:12:32.149796 1094884 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:12:32.149823 1094884 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-535130 NodeName:embed-certs-535130 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:12:32.149988 1094884 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-535130"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 15:12:32.150086 1094884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:12:32.158464 1094884 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:12:32.158526 1094884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:12:32.166272 1094884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1026 15:12:32.179046 1094884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:12:32.195352 1094884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
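The kubeadm.yaml just copied to the node is the YAML printed above, rendered from the cluster's parameters. A stripped-down sketch of that rendering with text/template; the template covers only a few of the fields shown and is illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

type clusterParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		AdvertiseAddress: "192.168.76.2",
		BindPort:         8443,
		NodeName:         "embed-certs-535130",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.34.1",
	}
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}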
	I1026 15:12:32.209747 1094884 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:12:32.213887 1094884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:12:32.224809 1094884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:12:32.308443 1094884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:12:32.338158 1094884 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130 for IP: 192.168.76.2
	I1026 15:12:32.338213 1094884 certs.go:195] generating shared ca certs ...
	I1026 15:12:32.338238 1094884 certs.go:227] acquiring lock for ca certs: {Name:mkc310765b5f037cf348f6c57ba521193a825757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:32.338410 1094884 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key
	I1026 15:12:32.338458 1094884 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key
	I1026 15:12:32.338469 1094884 certs.go:257] generating profile certs ...
	I1026 15:12:32.338529 1094884 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/client.key
	I1026 15:12:32.338550 1094884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/client.crt with IP's: []
	I1026 15:12:32.566180 1094884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/client.crt ...
	I1026 15:12:32.566211 1094884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/client.crt: {Name:mkd6d336e91342a08904be85dabf843a66ea95b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:32.566384 1094884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/client.key ...
	I1026 15:12:32.566397 1094884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/client.key: {Name:mk4416c5b817100d65b64e109f73505f873e43f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:32.566477 1094884 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key.abe399f3
	I1026 15:12:32.566499 1094884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.crt.abe399f3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1026 15:12:32.754452 1094884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.crt.abe399f3 ...
	I1026 15:12:32.754486 1094884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.crt.abe399f3: {Name:mkabb7862e92bef693c45258c1617506096cdb12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:32.754719 1094884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key.abe399f3 ...
	I1026 15:12:32.754740 1094884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key.abe399f3: {Name:mk215afc3790eeabca9034d99e286de6a2066abc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:32.754854 1094884 certs.go:382] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.crt.abe399f3 -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.crt
	I1026 15:12:32.755001 1094884 certs.go:386] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key.abe399f3 -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key
	I1026 15:12:32.755099 1094884 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.key
	I1026 15:12:32.755124 1094884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.crt with IP's: []
	I1026 15:12:33.207302 1094884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.crt ...
	I1026 15:12:33.207334 1094884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.crt: {Name:mk113bd43484e2aa10efeeed24889f71d62785e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:33.207519 1094884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.key ...
	I1026 15:12:33.207536 1094884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.key: {Name:mkfed9208b9b01aa68dc5edcf9bb22e51125ffb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
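Each of the profile certs written above (client, apiserver, proxy-client) is an x509 certificate generated in-process with the SANs shown in the log. A self-contained crypto/x509 sketch that produces a comparable certificate; for brevity it self-signs, whereas minikube signs these with the shared minikubeCA key that was skipped as already valid earlier:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key and template with the IP SANs from the apiserver cert log line.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	// Self-signed for the sketch: the template doubles as its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}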
	I1026 15:12:33.207753 1094884 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem (1338 bytes)
	W1026 15:12:33.207808 1094884 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095_empty.pem, impossibly tiny 0 bytes
	I1026 15:12:33.207819 1094884 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:12:33.207838 1094884 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:12:33.207860 1094884 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:12:33.207882 1094884 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem (1675 bytes)
	I1026 15:12:33.207921 1094884 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:12:33.208583 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:12:33.227595 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:12:33.245914 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:12:33.265321 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:12:33.285592 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1026 15:12:33.304704 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 15:12:33.323649 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:12:33.344374 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:12:33.364210 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem --> /usr/share/ca-certificates/845095.pem (1338 bytes)
	I1026 15:12:33.385157 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /usr/share/ca-certificates/8450952.pem (1708 bytes)
	I1026 15:12:33.404256 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:12:33.423316 1094884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:12:33.436366 1094884 ssh_runner.go:195] Run: openssl version
	I1026 15:12:33.442667 1094884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8450952.pem && ln -fs /usr/share/ca-certificates/8450952.pem /etc/ssl/certs/8450952.pem"
	I1026 15:12:33.451487 1094884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8450952.pem
	I1026 15:12:33.455627 1094884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:26 /usr/share/ca-certificates/8450952.pem
	I1026 15:12:33.455683 1094884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8450952.pem
	I1026 15:12:33.492656 1094884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8450952.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:12:33.502994 1094884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:12:33.512795 1094884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:12:33.517090 1094884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:14 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:12:33.517196 1094884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:12:33.552991 1094884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:12:33.563221 1094884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845095.pem && ln -fs /usr/share/ca-certificates/845095.pem /etc/ssl/certs/845095.pem"
	I1026 15:12:33.572631 1094884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845095.pem
	I1026 15:12:33.577148 1094884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:26 /usr/share/ca-certificates/845095.pem
	I1026 15:12:33.577254 1094884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845095.pem
	I1026 15:12:33.613124 1094884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/845095.pem /etc/ssl/certs/51391683.0"
	I1026 15:12:33.622923 1094884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:12:33.627486 1094884 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 15:12:33.627550 1094884 kubeadm.go:400] StartCluster: {Name:embed-certs-535130 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:12:33.627624 1094884 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:12:33.627672 1094884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:12:33.657036 1094884 cri.go:89] found id: ""
	I1026 15:12:33.657097 1094884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:12:33.665274 1094884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:12:33.673313 1094884 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 15:12:33.673363 1094884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:12:33.681347 1094884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:12:33.681368 1094884 kubeadm.go:157] found existing configuration files:
	
	I1026 15:12:33.681408 1094884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:12:33.689860 1094884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:12:33.689914 1094884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:12:33.698191 1094884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:12:33.706608 1094884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:12:33.706671 1094884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:12:33.715007 1094884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:12:33.724484 1094884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:12:33.724552 1094884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:12:33.732614 1094884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:12:33.740833 1094884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:12:33.740886 1094884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
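The four grep/rm pairs above are one pattern applied to each kubeconfig: if the file does not point at the expected control-plane endpoint (here they simply don't exist yet), remove it so the kubeadm init below regenerates it. A compact sketch of that check, reading the files locally instead of over SSH:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or stale: drop it so kubeadm init rewrites it.
			os.Remove(f)
			fmt.Println("removed stale config:", f)
		}
	}
}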
	I1026 15:12:33.748934 1094884 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 15:12:33.792631 1094884 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 15:12:33.792748 1094884 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 15:12:33.816843 1094884 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 15:12:33.816927 1094884 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 15:12:33.816973 1094884 kubeadm.go:318] OS: Linux
	I1026 15:12:33.817035 1094884 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 15:12:33.817132 1094884 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 15:12:33.817221 1094884 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 15:12:33.817300 1094884 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 15:12:33.817392 1094884 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 15:12:33.817469 1094884 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 15:12:33.817529 1094884 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 15:12:33.817610 1094884 kubeadm.go:318] CGROUPS_IO: enabled
	I1026 15:12:33.878062 1094884 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 15:12:33.878236 1094884 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 15:12:33.878364 1094884 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 15:12:33.887538 1094884 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 15:12:33.889316 1094884 out.go:252]   - Generating certificates and keys ...
	I1026 15:12:33.889393 1094884 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:12:33.889456 1094884 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:12:34.292495 1094884 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:12:34.449436 1094884 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:12:34.657020 1094884 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:12:35.300215 1094884 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:12:35.661499 1094884 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:12:35.661692 1094884 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-535130 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 15:12:35.807387 1094884 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:12:35.807513 1094884 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-535130 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 15:12:35.865776 1094884 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:12:36.035254 1094884 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:12:36.141587 1094884 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:12:36.141681 1094884 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:12:36.336316 1094884 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:12:36.502661 1094884 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 15:12:37.100733 1094884 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:12:37.150513 1094884 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:12:37.345845 1094884 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:12:37.346412 1094884 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:12:37.350599 1094884 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 15:12:37.353330 1094884 out.go:252]   - Booting up control plane ...
	I1026 15:12:37.353462 1094884 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:12:37.353580 1094884 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:12:37.353685 1094884 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:12:37.367641 1094884 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:12:37.367803 1094884 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:12:37.375592 1094884 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:12:37.375779 1094884 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:12:37.375850 1094884 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:12:37.490942 1094884 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:12:37.491126 1094884 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	
	
	==> CRI-O <==
	Oct 26 15:12:06 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:06.063199215Z" level=info msg="Created container 0c24c2a5f615fda4210dcf32cae74fec2545fc2e38658db2f8992a93a3393c3a: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bpdjl/kubernetes-dashboard" id=99e9b325-8227-446f-a252-cb87389dd090 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:12:06 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:06.063865648Z" level=info msg="Starting container: 0c24c2a5f615fda4210dcf32cae74fec2545fc2e38658db2f8992a93a3393c3a" id=9cabf219-0463-4ee0-84b3-20d9456eeb56 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:12:06 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:06.065928639Z" level=info msg="Started container" PID=1718 containerID=0c24c2a5f615fda4210dcf32cae74fec2545fc2e38658db2f8992a93a3393c3a description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bpdjl/kubernetes-dashboard id=9cabf219-0463-4ee0-84b3-20d9456eeb56 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6765c4ab322963bdcec7a093179bbeeb06478b5bc63a4ff4f37b5cca40f0a073
	Oct 26 15:12:18 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:18.909281876Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a791a33c-68e1-4a00-b81e-6bd8deea0a01 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:12:18 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:18.910312241Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bcba90a4-788e-4576-a642-c335b7468756 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:12:18 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:18.911386375Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=bf06df24-460f-474c-a749-3680f218f849 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:12:18 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:18.911537608Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:18 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:18.916613338Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:18 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:18.916822353Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b305f456c4a6f57fd692bfe6437ccb5f23c47e9adc146b7d038be455f9711236/merged/etc/passwd: no such file or directory"
	Oct 26 15:12:18 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:18.916864142Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b305f456c4a6f57fd692bfe6437ccb5f23c47e9adc146b7d038be455f9711236/merged/etc/group: no such file or directory"
	Oct 26 15:12:18 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:18.917155837Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:18 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:18.945258918Z" level=info msg="Created container 72d2bf4d876877af13ced9989fac81433cfe9707f6cc1c40255eff4437e7cb7a: kube-system/storage-provisioner/storage-provisioner" id=bf06df24-460f-474c-a749-3680f218f849 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:12:18 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:18.945881314Z" level=info msg="Starting container: 72d2bf4d876877af13ced9989fac81433cfe9707f6cc1c40255eff4437e7cb7a" id=d3035a25-90ed-4e88-b688-f03b56d3f742 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:12:18 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:18.948025841Z" level=info msg="Started container" PID=1743 containerID=72d2bf4d876877af13ced9989fac81433cfe9707f6cc1c40255eff4437e7cb7a description=kube-system/storage-provisioner/storage-provisioner id=d3035a25-90ed-4e88-b688-f03b56d3f742 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9c99c15e32a41e5744ea1c95f57acac94ef55400972ac88f73f76fc9c6f91487
	Oct 26 15:12:21 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:21.797656237Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4592450a-2e67-456e-8e86-4e2b60631252 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:12:21 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:21.798649291Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=599dfcb4-3f73-4c55-b936-978bbbfcc6ab name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:12:21 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:21.799804971Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6g4cz/dashboard-metrics-scraper" id=6b7e20c0-5241-4146-bf84-13d820bfafbe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:12:21 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:21.799935158Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:21 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:21.807576027Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:21 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:21.808128093Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:21 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:21.83252353Z" level=info msg="Created container 8a771a5866228d024c56d769dc7c0deb97ef861cd37d504f2e0ead44a3d579b8: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6g4cz/dashboard-metrics-scraper" id=6b7e20c0-5241-4146-bf84-13d820bfafbe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:12:21 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:21.833251908Z" level=info msg="Starting container: 8a771a5866228d024c56d769dc7c0deb97ef861cd37d504f2e0ead44a3d579b8" id=6aa4350f-bc21-4427-b0f4-18674c26cfbe name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:12:21 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:21.835547879Z" level=info msg="Started container" PID=1759 containerID=8a771a5866228d024c56d769dc7c0deb97ef861cd37d504f2e0ead44a3d579b8 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6g4cz/dashboard-metrics-scraper id=6aa4350f-bc21-4427-b0f4-18674c26cfbe name=/runtime.v1.RuntimeService/StartContainer sandboxID=1fbd4949787fb995ad3d2e337f7f197305cee0131daf9f37eae4fb808033d11f
	Oct 26 15:12:21 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:21.920419101Z" level=info msg="Removing container: 455107a11f6b9d2caea8fae3d54bce3cfc713edc69e8d2655d0e4f9a5ecc54f9" id=333e6ed7-c380-4303-9f02-a41da5b27a67 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:12:21 old-k8s-version-330914 crio[563]: time="2025-10-26T15:12:21.931120505Z" level=info msg="Removed container 455107a11f6b9d2caea8fae3d54bce3cfc713edc69e8d2655d0e4f9a5ecc54f9: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6g4cz/dashboard-metrics-scraper" id=333e6ed7-c380-4303-9f02-a41da5b27a67 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	8a771a5866228       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   1fbd4949787fb       dashboard-metrics-scraper-5f989dc9cf-6g4cz       kubernetes-dashboard
	72d2bf4d87687       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   9c99c15e32a41       storage-provisioner                              kube-system
	0c24c2a5f615f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   36 seconds ago      Running             kubernetes-dashboard        0                   6765c4ab32296       kubernetes-dashboard-8694d4445c-bpdjl            kubernetes-dashboard
	b9b6726cc13f8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           54 seconds ago      Running             coredns                     0                   ac3bb9b5e3857       coredns-5dd5756b68-hzjqn                         kube-system
	8c10864b97511       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   e37d7ef1b023b       busybox                                          default
	bcba52fd1283c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   18f01d150c8a7       kindnet-b8hhx                                    kube-system
	9d7f5b66a3f13       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   9c99c15e32a41       storage-provisioner                              kube-system
	8d54d1c865642       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           54 seconds ago      Running             kube-proxy                  0                   ddc39a21c0216       kube-proxy-829lp                                 kube-system
	57862b704429a       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           57 seconds ago      Running             kube-scheduler              0                   d550a78920fe8       kube-scheduler-old-k8s-version-330914            kube-system
	e7c9e2373d25d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           57 seconds ago      Running             etcd                        0                   d7cb33b10ff0f       etcd-old-k8s-version-330914                      kube-system
	14610085016db       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           57 seconds ago      Running             kube-controller-manager     0                   49cde6bc7729b       kube-controller-manager-old-k8s-version-330914   kube-system
	ebe6998e952fa       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           57 seconds ago      Running             kube-apiserver              0                   c694fbb72e963       kube-apiserver-old-k8s-version-330914            kube-system
	
	
	==> coredns [b9b6726cc13f8a84b43e30b07c19acad2e63b4378a8bf17b7d9363d787298f47] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:35576 - 37043 "HINFO IN 3461098357155546764.732809821893994727. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.076516391s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-330914
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-330914
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=old-k8s-version-330914
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_10_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:10:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-330914
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:12:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:12:18 +0000   Sun, 26 Oct 2025 15:10:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:12:18 +0000   Sun, 26 Oct 2025 15:10:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:12:18 +0000   Sun, 26 Oct 2025 15:10:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:12:18 +0000   Sun, 26 Oct 2025 15:11:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-330914
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                7b3315c3-b9ce-4fbb-a096-582c49bc7b55
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-hzjqn                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-old-k8s-version-330914                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-b8hhx                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-old-k8s-version-330914             250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-old-k8s-version-330914    200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-829lp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-old-k8s-version-330914             100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-6g4cz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-bpdjl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 54s                  kube-proxy       
	  Normal  Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-330914 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-330914 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node old-k8s-version-330914 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    119s                 kubelet          Node old-k8s-version-330914 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  119s                 kubelet          Node old-k8s-version-330914 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     119s                 kubelet          Node old-k8s-version-330914 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node old-k8s-version-330914 event: Registered Node old-k8s-version-330914 in Controller
	  Normal  NodeReady                93s                  kubelet          Node old-k8s-version-330914 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x9 over 58s)    kubelet          Node old-k8s-version-330914 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)    kubelet          Node old-k8s-version-330914 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x7 over 58s)    kubelet          Node old-k8s-version-330914 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                  node-controller  Node old-k8s-version-330914 event: Registered Node old-k8s-version-330914 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	
	
	==> etcd [e7c9e2373d25df292a06c5e68b12ca31b0890e6f5f98c7704a6a20c7acce02f7] <==
	{"level":"info","ts":"2025-10-26T15:11:45.352417Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-26T15:11:45.352452Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-26T15:11:45.352529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-26T15:11:45.352633Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-26T15:11:45.352819Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T15:11:45.352904Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T15:11:45.355485Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-26T15:11:45.355653Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-26T15:11:45.35569Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-26T15:11:45.355849Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-26T15:11:45.355912Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-26T15:11:46.343993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-26T15:11:46.344036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-26T15:11:46.344077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-26T15:11:46.344096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-26T15:11:46.344104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-26T15:11:46.344114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-26T15:11:46.344121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-26T15:11:46.34566Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-330914 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-26T15:11:46.345664Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T15:11:46.345719Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T15:11:46.345896Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-26T15:11:46.345921Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-26T15:11:46.34698Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-26T15:11:46.346975Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 15:12:42 up  2:55,  0 user,  load average: 2.46, 2.42, 1.67
	Linux old-k8s-version-330914 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bcba52fd1283c6a8528b225e7149f8ad6f13d72ccdf6c221344f3d60fb7c2912] <==
	I1026 15:11:48.353247       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:11:48.353543       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 15:11:48.353742       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:11:48.353766       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:11:48.353781       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:11:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:11:48.560249       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:11:48.560301       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:11:48.560319       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:11:48.560659       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 15:11:48.961122       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:11:48.961154       1 metrics.go:72] Registering metrics
	I1026 15:11:48.961250       1 controller.go:711] "Syncing nftables rules"
	I1026 15:11:58.561238       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:11:58.561350       1 main.go:301] handling current node
	I1026 15:12:08.560364       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:12:08.560409       1 main.go:301] handling current node
	I1026 15:12:18.560330       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:12:18.560370       1 main.go:301] handling current node
	I1026 15:12:28.564808       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:12:28.564850       1 main.go:301] handling current node
	I1026 15:12:38.566321       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:12:38.566372       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ebe6998e952fa61da87a8c37ca602b0f2ebdf5f7cf4025c9fd2507b770af8504] <==
	I1026 15:11:47.411032       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1026 15:11:47.411056       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1026 15:11:47.411219       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1026 15:11:47.411503       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1026 15:11:47.411699       1 shared_informer.go:318] Caches are synced for configmaps
	I1026 15:11:47.412208       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 15:11:47.413739       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1026 15:11:47.413775       1 aggregator.go:166] initial CRD sync complete...
	I1026 15:11:47.413783       1 autoregister_controller.go:141] Starting autoregister controller
	I1026 15:11:47.413796       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:11:47.413803       1 cache.go:39] Caches are synced for autoregister controller
	E1026 15:11:47.417240       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 15:11:47.467679       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:11:47.468463       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1026 15:11:48.302152       1 controller.go:624] quota admission added evaluator for: namespaces
	I1026 15:11:48.314617       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:11:48.337863       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1026 15:11:48.358698       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:11:48.366805       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:11:48.376228       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1026 15:11:48.421950       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.179.162"}
	I1026 15:11:48.437752       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.227.95"}
	I1026 15:11:59.611159       1 controller.go:624] quota admission added evaluator for: endpoints
	I1026 15:11:59.655218       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:11:59.739281       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [14610085016dbaf8341ce666f39a20518090a5e59a40d14c2f08730cc477f696] <==
	I1026 15:11:59.783838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="127.641µs"
	I1026 15:11:59.785901       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="59.652µs"
	I1026 15:11:59.789932       1 shared_informer.go:318] Caches are synced for cronjob
	I1026 15:11:59.795983       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.915µs"
	I1026 15:11:59.806062       1 shared_informer.go:318] Caches are synced for taint
	I1026 15:11:59.806180       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1026 15:11:59.806310       1 taint_manager.go:211] "Sending events to api server"
	I1026 15:11:59.806374       1 event.go:307] "Event occurred" object="old-k8s-version-330914" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-330914 event: Registered Node old-k8s-version-330914 in Controller"
	I1026 15:11:59.806205       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1026 15:11:59.806621       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-330914"
	I1026 15:11:59.806735       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1026 15:11:59.830210       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 15:11:59.853576       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 15:12:00.173861       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 15:12:00.188333       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 15:12:00.188370       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1026 15:12:02.874883       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="142.873µs"
	I1026 15:12:03.881403       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="133.281µs"
	I1026 15:12:04.884706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.564µs"
	I1026 15:12:06.895443       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.68245ms"
	I1026 15:12:06.895552       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.081µs"
	I1026 15:12:21.931349       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.386µs"
	I1026 15:12:23.256108       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.957355ms"
	I1026 15:12:23.256287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="125.122µs"
	I1026 15:12:30.079943       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="109.734µs"
	
	
	==> kube-proxy [8d54d1c865642a190dabbe2a4e3938bf3b3c9343a8c8d4d402b72a694a82f3bc] <==
	I1026 15:11:48.207584       1 server_others.go:69] "Using iptables proxy"
	I1026 15:11:48.217361       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1026 15:11:48.238276       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:11:48.240637       1 server_others.go:152] "Using iptables Proxier"
	I1026 15:11:48.240673       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1026 15:11:48.240685       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1026 15:11:48.240720       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1026 15:11:48.241037       1 server.go:846] "Version info" version="v1.28.0"
	I1026 15:11:48.241103       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:11:48.241863       1 config.go:97] "Starting endpoint slice config controller"
	I1026 15:11:48.242565       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1026 15:11:48.242751       1 config.go:315] "Starting node config controller"
	I1026 15:11:48.242761       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1026 15:11:48.243200       1 config.go:188] "Starting service config controller"
	I1026 15:11:48.243360       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1026 15:11:48.343444       1 shared_informer.go:318] Caches are synced for node config
	I1026 15:11:48.343455       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1026 15:11:48.343605       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [57862b704429a1e7b57796a2620311a2e27ce616153a415ac4d41876a1582708] <==
	I1026 15:11:45.694266       1 serving.go:348] Generated self-signed cert in-memory
	W1026 15:11:47.350433       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 15:11:47.350470       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 15:11:47.350484       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 15:11:47.350495       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 15:11:47.370305       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1026 15:11:47.370342       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:11:47.372035       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:11:47.372138       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 15:11:47.373138       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1026 15:11:47.373506       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1026 15:11:47.473252       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 26 15:11:59 old-k8s-version-330914 kubelet[724]: I1026 15:11:59.767518     724 topology_manager.go:215] "Topology Admit Handler" podUID="cb51d6f6-61ac-4b04-875f-2daec24a4210" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-6g4cz"
	Oct 26 15:11:59 old-k8s-version-330914 kubelet[724]: I1026 15:11:59.945074     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cb51d6f6-61ac-4b04-875f-2daec24a4210-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-6g4cz\" (UID: \"cb51d6f6-61ac-4b04-875f-2daec24a4210\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6g4cz"
	Oct 26 15:11:59 old-k8s-version-330914 kubelet[724]: I1026 15:11:59.945143     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts7jv\" (UniqueName: \"kubernetes.io/projected/662c14a7-1a94-4d0c-b7e0-9c2d8eef8724-kube-api-access-ts7jv\") pod \"kubernetes-dashboard-8694d4445c-bpdjl\" (UID: \"662c14a7-1a94-4d0c-b7e0-9c2d8eef8724\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bpdjl"
	Oct 26 15:11:59 old-k8s-version-330914 kubelet[724]: I1026 15:11:59.945196     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/662c14a7-1a94-4d0c-b7e0-9c2d8eef8724-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-bpdjl\" (UID: \"662c14a7-1a94-4d0c-b7e0-9c2d8eef8724\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bpdjl"
	Oct 26 15:11:59 old-k8s-version-330914 kubelet[724]: I1026 15:11:59.945294     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf58s\" (UniqueName: \"kubernetes.io/projected/cb51d6f6-61ac-4b04-875f-2daec24a4210-kube-api-access-xf58s\") pod \"dashboard-metrics-scraper-5f989dc9cf-6g4cz\" (UID: \"cb51d6f6-61ac-4b04-875f-2daec24a4210\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6g4cz"
	Oct 26 15:12:02 old-k8s-version-330914 kubelet[724]: I1026 15:12:02.863295     724 scope.go:117] "RemoveContainer" containerID="5c3ee1b7015d7e29d7597e1c6398773f27630790401ed668d1ae2541726835bb"
	Oct 26 15:12:03 old-k8s-version-330914 kubelet[724]: I1026 15:12:03.867521     724 scope.go:117] "RemoveContainer" containerID="5c3ee1b7015d7e29d7597e1c6398773f27630790401ed668d1ae2541726835bb"
	Oct 26 15:12:03 old-k8s-version-330914 kubelet[724]: I1026 15:12:03.867855     724 scope.go:117] "RemoveContainer" containerID="455107a11f6b9d2caea8fae3d54bce3cfc713edc69e8d2655d0e4f9a5ecc54f9"
	Oct 26 15:12:03 old-k8s-version-330914 kubelet[724]: E1026 15:12:03.868217     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6g4cz_kubernetes-dashboard(cb51d6f6-61ac-4b04-875f-2daec24a4210)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6g4cz" podUID="cb51d6f6-61ac-4b04-875f-2daec24a4210"
	Oct 26 15:12:04 old-k8s-version-330914 kubelet[724]: I1026 15:12:04.871743     724 scope.go:117] "RemoveContainer" containerID="455107a11f6b9d2caea8fae3d54bce3cfc713edc69e8d2655d0e4f9a5ecc54f9"
	Oct 26 15:12:04 old-k8s-version-330914 kubelet[724]: E1026 15:12:04.872110     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6g4cz_kubernetes-dashboard(cb51d6f6-61ac-4b04-875f-2daec24a4210)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6g4cz" podUID="cb51d6f6-61ac-4b04-875f-2daec24a4210"
	Oct 26 15:12:06 old-k8s-version-330914 kubelet[724]: I1026 15:12:06.889424     724 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-bpdjl" podStartSLOduration=1.962032566 podCreationTimestamp="2025-10-26 15:11:59 +0000 UTC" firstStartedPulling="2025-10-26 15:12:00.094157732 +0000 UTC m=+15.391600501" lastFinishedPulling="2025-10-26 15:12:06.021485804 +0000 UTC m=+21.318928560" observedRunningTime="2025-10-26 15:12:06.889133962 +0000 UTC m=+22.186576750" watchObservedRunningTime="2025-10-26 15:12:06.889360625 +0000 UTC m=+22.186803399"
	Oct 26 15:12:10 old-k8s-version-330914 kubelet[724]: I1026 15:12:10.068707     724 scope.go:117] "RemoveContainer" containerID="455107a11f6b9d2caea8fae3d54bce3cfc713edc69e8d2655d0e4f9a5ecc54f9"
	Oct 26 15:12:10 old-k8s-version-330914 kubelet[724]: E1026 15:12:10.069127     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6g4cz_kubernetes-dashboard(cb51d6f6-61ac-4b04-875f-2daec24a4210)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6g4cz" podUID="cb51d6f6-61ac-4b04-875f-2daec24a4210"
	Oct 26 15:12:18 old-k8s-version-330914 kubelet[724]: I1026 15:12:18.908679     724 scope.go:117] "RemoveContainer" containerID="9d7f5b66a3f13ea53acbb40e7d705efc2a46e95c15e0215793c795a76ecbaef1"
	Oct 26 15:12:21 old-k8s-version-330914 kubelet[724]: I1026 15:12:21.796932     724 scope.go:117] "RemoveContainer" containerID="455107a11f6b9d2caea8fae3d54bce3cfc713edc69e8d2655d0e4f9a5ecc54f9"
	Oct 26 15:12:21 old-k8s-version-330914 kubelet[724]: I1026 15:12:21.919093     724 scope.go:117] "RemoveContainer" containerID="455107a11f6b9d2caea8fae3d54bce3cfc713edc69e8d2655d0e4f9a5ecc54f9"
	Oct 26 15:12:21 old-k8s-version-330914 kubelet[724]: I1026 15:12:21.919366     724 scope.go:117] "RemoveContainer" containerID="8a771a5866228d024c56d769dc7c0deb97ef861cd37d504f2e0ead44a3d579b8"
	Oct 26 15:12:21 old-k8s-version-330914 kubelet[724]: E1026 15:12:21.919747     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6g4cz_kubernetes-dashboard(cb51d6f6-61ac-4b04-875f-2daec24a4210)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6g4cz" podUID="cb51d6f6-61ac-4b04-875f-2daec24a4210"
	Oct 26 15:12:30 old-k8s-version-330914 kubelet[724]: I1026 15:12:30.069092     724 scope.go:117] "RemoveContainer" containerID="8a771a5866228d024c56d769dc7c0deb97ef861cd37d504f2e0ead44a3d579b8"
	Oct 26 15:12:30 old-k8s-version-330914 kubelet[724]: E1026 15:12:30.069540     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6g4cz_kubernetes-dashboard(cb51d6f6-61ac-4b04-875f-2daec24a4210)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6g4cz" podUID="cb51d6f6-61ac-4b04-875f-2daec24a4210"
	Oct 26 15:12:37 old-k8s-version-330914 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:12:37 old-k8s-version-330914 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:12:37 old-k8s-version-330914 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 26 15:12:37 old-k8s-version-330914 systemd[1]: kubelet.service: Consumed 1.592s CPU time.
	
	
	==> kubernetes-dashboard [0c24c2a5f615fda4210dcf32cae74fec2545fc2e38658db2f8992a93a3393c3a] <==
	2025/10/26 15:12:06 Using namespace: kubernetes-dashboard
	2025/10/26 15:12:06 Using in-cluster config to connect to apiserver
	2025/10/26 15:12:06 Using secret token for csrf signing
	2025/10/26 15:12:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 15:12:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 15:12:06 Successful initial request to the apiserver, version: v1.28.0
	2025/10/26 15:12:06 Generating JWE encryption key
	2025/10/26 15:12:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 15:12:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 15:12:06 Initializing JWE encryption key from synchronized object
	2025/10/26 15:12:06 Creating in-cluster Sidecar client
	2025/10/26 15:12:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:12:06 Serving insecurely on HTTP port: 9090
	2025/10/26 15:12:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:12:06 Starting overwatch
	
	
	==> storage-provisioner [72d2bf4d876877af13ced9989fac81433cfe9707f6cc1c40255eff4437e7cb7a] <==
	I1026 15:12:18.962334       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 15:12:18.972544       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 15:12:18.972596       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 15:12:36.368801       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:12:36.369018       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-330914_eefb52e5-4023-4ca4-a96b-3f3172d039c2!
	I1026 15:12:36.368930       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6451c6cc-4615-4622-b59c-d1296145dee3", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-330914_eefb52e5-4023-4ca4-a96b-3f3172d039c2 became leader
	I1026 15:12:36.469394       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-330914_eefb52e5-4023-4ca4-a96b-3f3172d039c2!
	
	
	==> storage-provisioner [9d7f5b66a3f13ea53acbb40e7d705efc2a46e95c15e0215793c795a76ecbaef1] <==
	I1026 15:11:48.179130       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 15:12:18.181924       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-330914 -n old-k8s-version-330914
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-330914 -n old-k8s-version-330914: exit status 2 (383.586852ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-330914 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.81s)

TestStartStop/group/no-preload/serial/Pause (6.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-475081 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-475081 --alsologtostderr -v=1: exit status 80 (1.674978306s)

-- stdout --
	* Pausing node no-preload-475081 ... 
	
	

-- /stdout --
** stderr ** 
	I1026 15:12:44.351870 1099387 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:12:44.352131 1099387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:12:44.352139 1099387 out.go:374] Setting ErrFile to fd 2...
	I1026 15:12:44.352143 1099387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:12:44.352354 1099387 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:12:44.352609 1099387 out.go:368] Setting JSON to false
	I1026 15:12:44.352665 1099387 mustload.go:65] Loading cluster: no-preload-475081
	I1026 15:12:44.353031 1099387 config.go:182] Loaded profile config "no-preload-475081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:12:44.353432 1099387 cli_runner.go:164] Run: docker container inspect no-preload-475081 --format={{.State.Status}}
	I1026 15:12:44.375280 1099387 host.go:66] Checking if "no-preload-475081" exists ...
	I1026 15:12:44.375712 1099387 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:12:44.437456 1099387 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-26 15:12:44.426293652 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:12:44.438127 1099387 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-475081 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 15:12:44.440248 1099387 out.go:179] * Pausing node no-preload-475081 ... 
	I1026 15:12:44.441406 1099387 host.go:66] Checking if "no-preload-475081" exists ...
	I1026 15:12:44.441670 1099387 ssh_runner.go:195] Run: systemctl --version
	I1026 15:12:44.441715 1099387 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-475081
	I1026 15:12:44.461464 1099387 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33837 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/no-preload-475081/id_rsa Username:docker}
	I1026 15:12:44.563879 1099387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:12:44.587816 1099387 pause.go:52] kubelet running: true
	I1026 15:12:44.587901 1099387 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:12:44.757293 1099387 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:12:44.757386 1099387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:12:44.833783 1099387 cri.go:89] found id: "25eb572506b0f83109b10c039ec7e2b2de1dbc85f0c88659521085982e86afb2"
	I1026 15:12:44.833816 1099387 cri.go:89] found id: "c9a8e6dbea2afb3687eae7cf2bdc70948901868e83cb61c6a3824c7badb8f216"
	I1026 15:12:44.833822 1099387 cri.go:89] found id: "8db7d27d5a3179e5157c946470653bdf1401a6583999cfe0b6e584dbd4aa55da"
	I1026 15:12:44.833828 1099387 cri.go:89] found id: "fd565f0a0c107feb72e7717ce5647b8b1b147ee26d5fbd64db46807decac800f"
	I1026 15:12:44.833832 1099387 cri.go:89] found id: "4128b713bc3dafdb515ba3846752fb30d1e3a80d54c49f4b46aa2506000b8235"
	I1026 15:12:44.833836 1099387 cri.go:89] found id: "55addbe4a3d90ebe69842fd45024bf12a7a8de8c7e93f05e1323f03b190d25ec"
	I1026 15:12:44.833840 1099387 cri.go:89] found id: "72798a668fb70570d7f8691079339c46937ad357412930fe98e931819114ad86"
	I1026 15:12:44.833845 1099387 cri.go:89] found id: "c9e1c6df0d421d98d9ed1fd66b6c86206eb9055c5559f467f5d78f9891d1b67b"
	I1026 15:12:44.833849 1099387 cri.go:89] found id: "ca6f184d3a6d0f2f0031a61280ff5266dd116d977154881018aa85f3aa81d941"
	I1026 15:12:44.833872 1099387 cri.go:89] found id: "0ec84e9db53528cf0f7266fd92440deff135d0675eb15c3c396aa1c80902cc8a"
	I1026 15:12:44.833876 1099387 cri.go:89] found id: "e82d8c73ee208681c543afe0a6794823e783821144b3f8cfefc86d3f34178a92"
	I1026 15:12:44.833881 1099387 cri.go:89] found id: ""
	I1026 15:12:44.833928 1099387 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:12:44.847419 1099387 retry.go:31] will retry after 322.835258ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:12:44Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:12:45.170990 1099387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:12:45.185485 1099387 pause.go:52] kubelet running: false
	I1026 15:12:45.185547 1099387 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:12:45.344394 1099387 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:12:45.344473 1099387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:12:45.426336 1099387 cri.go:89] found id: "25eb572506b0f83109b10c039ec7e2b2de1dbc85f0c88659521085982e86afb2"
	I1026 15:12:45.426362 1099387 cri.go:89] found id: "c9a8e6dbea2afb3687eae7cf2bdc70948901868e83cb61c6a3824c7badb8f216"
	I1026 15:12:45.426368 1099387 cri.go:89] found id: "8db7d27d5a3179e5157c946470653bdf1401a6583999cfe0b6e584dbd4aa55da"
	I1026 15:12:45.426374 1099387 cri.go:89] found id: "fd565f0a0c107feb72e7717ce5647b8b1b147ee26d5fbd64db46807decac800f"
	I1026 15:12:45.426378 1099387 cri.go:89] found id: "4128b713bc3dafdb515ba3846752fb30d1e3a80d54c49f4b46aa2506000b8235"
	I1026 15:12:45.426382 1099387 cri.go:89] found id: "55addbe4a3d90ebe69842fd45024bf12a7a8de8c7e93f05e1323f03b190d25ec"
	I1026 15:12:45.426387 1099387 cri.go:89] found id: "72798a668fb70570d7f8691079339c46937ad357412930fe98e931819114ad86"
	I1026 15:12:45.426391 1099387 cri.go:89] found id: "c9e1c6df0d421d98d9ed1fd66b6c86206eb9055c5559f467f5d78f9891d1b67b"
	I1026 15:12:45.426396 1099387 cri.go:89] found id: "ca6f184d3a6d0f2f0031a61280ff5266dd116d977154881018aa85f3aa81d941"
	I1026 15:12:45.426405 1099387 cri.go:89] found id: "0ec84e9db53528cf0f7266fd92440deff135d0675eb15c3c396aa1c80902cc8a"
	I1026 15:12:45.426409 1099387 cri.go:89] found id: "e82d8c73ee208681c543afe0a6794823e783821144b3f8cfefc86d3f34178a92"
	I1026 15:12:45.426413 1099387 cri.go:89] found id: ""
	I1026 15:12:45.426461 1099387 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:12:45.441158 1099387 retry.go:31] will retry after 219.536759ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:12:45Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:12:45.661635 1099387 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:12:45.677210 1099387 pause.go:52] kubelet running: false
	I1026 15:12:45.677280 1099387 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:12:45.837948 1099387 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:12:45.838052 1099387 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:12:45.916152 1099387 cri.go:89] found id: "25eb572506b0f83109b10c039ec7e2b2de1dbc85f0c88659521085982e86afb2"
	I1026 15:12:45.916197 1099387 cri.go:89] found id: "c9a8e6dbea2afb3687eae7cf2bdc70948901868e83cb61c6a3824c7badb8f216"
	I1026 15:12:45.916202 1099387 cri.go:89] found id: "8db7d27d5a3179e5157c946470653bdf1401a6583999cfe0b6e584dbd4aa55da"
	I1026 15:12:45.916207 1099387 cri.go:89] found id: "fd565f0a0c107feb72e7717ce5647b8b1b147ee26d5fbd64db46807decac800f"
	I1026 15:12:45.916211 1099387 cri.go:89] found id: "4128b713bc3dafdb515ba3846752fb30d1e3a80d54c49f4b46aa2506000b8235"
	I1026 15:12:45.916215 1099387 cri.go:89] found id: "55addbe4a3d90ebe69842fd45024bf12a7a8de8c7e93f05e1323f03b190d25ec"
	I1026 15:12:45.916219 1099387 cri.go:89] found id: "72798a668fb70570d7f8691079339c46937ad357412930fe98e931819114ad86"
	I1026 15:12:45.916223 1099387 cri.go:89] found id: "c9e1c6df0d421d98d9ed1fd66b6c86206eb9055c5559f467f5d78f9891d1b67b"
	I1026 15:12:45.916227 1099387 cri.go:89] found id: "ca6f184d3a6d0f2f0031a61280ff5266dd116d977154881018aa85f3aa81d941"
	I1026 15:12:45.916238 1099387 cri.go:89] found id: "0ec84e9db53528cf0f7266fd92440deff135d0675eb15c3c396aa1c80902cc8a"
	I1026 15:12:45.916246 1099387 cri.go:89] found id: "e82d8c73ee208681c543afe0a6794823e783821144b3f8cfefc86d3f34178a92"
	I1026 15:12:45.916251 1099387 cri.go:89] found id: ""
	I1026 15:12:45.916300 1099387 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:12:45.935395 1099387 out.go:203] 
	W1026 15:12:45.937463 1099387 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:12:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 15:12:45.937485 1099387 out.go:285] * 
	W1026 15:12:45.942950 1099387 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 15:12:45.945066 1099387 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-475081 --alsologtostderr -v=1 failed: exit status 80
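Editor's note: the pause failure above reduces to one probe, `sudo runc list -f json`, failing with "open /run/runc: no such file or directory", and minikube retrying with a growing delay before exiting GUEST_PAUSE. Below is a minimal Go sketch of that probe-and-retry shape, not minikube's actual retry.go; the command and error come from the log, while the time budget and backoff constants are illustrative assumptions.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // listRuncContainers runs exactly the probe the pause path runs on the node.
    func listRuncContainers() ([]byte, error) {
    	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
    }

    func main() {
    	deadline := time.Now().Add(2 * time.Second) // illustrative budget
    	backoff := 300 * time.Millisecond           // roughly the spacing of the logged retries
    	for {
    		out, err := listRuncContainers()
    		if err == nil {
    			fmt.Printf("runc containers: %s\n", out)
    			return
    		}
    		if time.Now().After(deadline) {
    			// Mirrors the GUEST_PAUSE exit above: /run/runc never
    			// appears on this crio node, so every attempt fails.
    			fmt.Printf("giving up: %v\n", err)
    			return
    		}
    		fmt.Printf("will retry after %v: %v\n", backoff, err)
    		time.Sleep(backoff)
    		backoff = backoff * 3 / 2
    	}
    }

On a node where the runc state directory is absent, this loop reproduces the retry/give-up pattern seen in the log.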
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-475081
helpers_test.go:243: (dbg) docker inspect no-preload-475081:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e55f49a3db72f1b24108085ea7f4b5e53553ce1ef7c1d5f10ad348de3f9ba2f",
	        "Created": "2025-10-26T15:10:28.066508779Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1088209,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:11:43.497543651Z",
	            "FinishedAt": "2025-10-26T15:11:42.544687575Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/5e55f49a3db72f1b24108085ea7f4b5e53553ce1ef7c1d5f10ad348de3f9ba2f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e55f49a3db72f1b24108085ea7f4b5e53553ce1ef7c1d5f10ad348de3f9ba2f/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e55f49a3db72f1b24108085ea7f4b5e53553ce1ef7c1d5f10ad348de3f9ba2f/hosts",
	        "LogPath": "/var/lib/docker/containers/5e55f49a3db72f1b24108085ea7f4b5e53553ce1ef7c1d5f10ad348de3f9ba2f/5e55f49a3db72f1b24108085ea7f4b5e53553ce1ef7c1d5f10ad348de3f9ba2f-json.log",
	        "Name": "/no-preload-475081",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-475081:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-475081",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e55f49a3db72f1b24108085ea7f4b5e53553ce1ef7c1d5f10ad348de3f9ba2f",
	                "LowerDir": "/var/lib/docker/overlay2/5d8f134ee6ffed6d774f4544c7c284f648de8e02713b44278cfa81aa87432fd1-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5d8f134ee6ffed6d774f4544c7c284f648de8e02713b44278cfa81aa87432fd1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5d8f134ee6ffed6d774f4544c7c284f648de8e02713b44278cfa81aa87432fd1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5d8f134ee6ffed6d774f4544c7c284f648de8e02713b44278cfa81aa87432fd1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-475081",
	                "Source": "/var/lib/docker/volumes/no-preload-475081/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-475081",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-475081",
	                "name.minikube.sigs.k8s.io": "no-preload-475081",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3e69a5fcb8a91227eb3b34dd1354020240fb88cb657cb102df54e5bb652f6290",
	            "SandboxKey": "/var/run/docker/netns/3e69a5fcb8a9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33837"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33838"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33841"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33839"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33840"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-475081": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:a3:a9:68:2e:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "da1bd6d7ce5203f11d1c54a9875cb6a6358a5bc321289fcb416f235a12121f07",
	                    "EndpointID": "71ace854f184b11ef48ec7244b9301bcd1a7f8995158afa343218d64554aff2b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-475081",
	                        "5e55f49a3db7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
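Editor's note: the post-mortem relies on `docker inspect`, whose NetworkSettings.Ports block maps container ports to loopback host ports (22/tcp to 33837 above). The cli_runner step earlier in this log extracts the SSH port with a Go template; the sketch below shells out with that same template. The container name is the one from this report; everything else is standard docker CLI behaviour.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const name = "no-preload-475081"
    	// The same template the pause log's cli_runner step uses.
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	// For the container above this prints 33837, matching the
    	// sshutil "new ssh client" line in the pause log.
    	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
    }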
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-475081 -n no-preload-475081
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-475081 -n no-preload-475081: exit status 2 (350.188804ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-475081 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-475081 logs -n 25: (1.327521578s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:12:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:12:22.723695 1094884 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:12:22.723977 1094884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:12:22.723989 1094884 out.go:374] Setting ErrFile to fd 2...
	I1026 15:12:22.723995 1094884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:12:22.724291 1094884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:12:22.724794 1094884 out.go:368] Setting JSON to false
	I1026 15:12:22.726080 1094884 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10491,"bootTime":1761481052,"procs":413,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:12:22.726194 1094884 start.go:141] virtualization: kvm guest
	I1026 15:12:22.728318 1094884 out.go:179] * [embed-certs-535130] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:12:22.729604 1094884 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:12:22.729606 1094884 notify.go:220] Checking for updates...
	I1026 15:12:22.732660 1094884 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:12:22.734078 1094884 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:12:22.735315 1094884 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 15:12:22.736302 1094884 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:12:22.737366 1094884 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:12:22.738837 1094884 config.go:182] Loaded profile config "cert-expiration-619245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:12:22.738935 1094884 config.go:182] Loaded profile config "no-preload-475081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:12:22.739013 1094884 config.go:182] Loaded profile config "old-k8s-version-330914": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 15:12:22.739113 1094884 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:12:22.764422 1094884 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 15:12:22.764534 1094884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:12:22.829223 1094884 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-26 15:12:22.816741758 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:12:22.829376 1094884 docker.go:318] overlay module found
	I1026 15:12:22.832034 1094884 out.go:179] * Using the docker driver based on user configuration
	W1026 15:12:18.001061 1086607 pod_ready.go:104] pod "coredns-5dd5756b68-hzjqn" is not "Ready", error: <nil>
	W1026 15:12:20.003024 1086607 pod_ready.go:104] pod "coredns-5dd5756b68-hzjqn" is not "Ready", error: <nil>
	W1026 15:12:22.003141 1086607 pod_ready.go:104] pod "coredns-5dd5756b68-hzjqn" is not "Ready", error: <nil>
	I1026 15:12:22.833219 1094884 start.go:305] selected driver: docker
	I1026 15:12:22.833236 1094884 start.go:925] validating driver "docker" against <nil>
	I1026 15:12:22.833255 1094884 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:12:22.833817 1094884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:12:22.893827 1094884 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-26 15:12:22.883069758 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:12:22.894093 1094884 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:12:22.894326 1094884 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:12:22.895696 1094884 out.go:179] * Using Docker driver with root privileges
	I1026 15:12:22.896861 1094884 cni.go:84] Creating CNI manager for ""
	I1026 15:12:22.896952 1094884 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:12:22.896969 1094884 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 15:12:22.897079 1094884 start.go:349] cluster config:
	{Name:embed-certs-535130 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:12:22.898546 1094884 out.go:179] * Starting "embed-certs-535130" primary control-plane node in "embed-certs-535130" cluster
	I1026 15:12:22.899674 1094884 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:12:22.900838 1094884 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:12:22.901910 1094884 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:12:22.901967 1094884 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 15:12:22.901983 1094884 cache.go:58] Caching tarball of preloaded images
	I1026 15:12:22.902045 1094884 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:12:22.902150 1094884 preload.go:233] Found /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 15:12:22.902201 1094884 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:12:22.902353 1094884 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/config.json ...
	I1026 15:12:22.902381 1094884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/config.json: {Name:mk12a66b75728d08ad27e4045a242e76128ff185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:22.925433 1094884 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:12:22.925455 1094884 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:12:22.925472 1094884 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:12:22.925507 1094884 start.go:360] acquireMachinesLock for embed-certs-535130: {Name:mk2308f6e6d84ecfdd2789c813704db715591895 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:12:22.925609 1094884 start.go:364] duration metric: took 84.211µs to acquireMachinesLock for "embed-certs-535130"
	I1026 15:12:22.925633 1094884 start.go:93] Provisioning new machine with config: &{Name:embed-certs-535130 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:12:22.925700 1094884 start.go:125] createHost starting for "" (driver="docker")
	W1026 15:12:19.071838 1087870 pod_ready.go:104] pod "coredns-66bc5c9577-knr22" is not "Ready", error: <nil>
	W1026 15:12:21.570936 1087870 pod_ready.go:104] pod "coredns-66bc5c9577-knr22" is not "Ready", error: <nil>
	I1026 15:12:23.502675 1086607 pod_ready.go:94] pod "coredns-5dd5756b68-hzjqn" is "Ready"
	I1026 15:12:23.502703 1086607 pod_ready.go:86] duration metric: took 34.507438685s for pod "coredns-5dd5756b68-hzjqn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:23.506504 1086607 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:23.511539 1086607 pod_ready.go:94] pod "etcd-old-k8s-version-330914" is "Ready"
	I1026 15:12:23.511569 1086607 pod_ready.go:86] duration metric: took 5.033388ms for pod "etcd-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:23.515140 1086607 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:23.520139 1086607 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-330914" is "Ready"
	I1026 15:12:23.520198 1086607 pod_ready.go:86] duration metric: took 4.997939ms for pod "kube-apiserver-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:23.523393 1086607 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:23.700379 1086607 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-330914" is "Ready"
	I1026 15:12:23.700409 1086607 pod_ready.go:86] duration metric: took 176.992551ms for pod "kube-controller-manager-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:23.900733 1086607 pod_ready.go:83] waiting for pod "kube-proxy-829lp" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:24.299617 1086607 pod_ready.go:94] pod "kube-proxy-829lp" is "Ready"
	I1026 15:12:24.299649 1086607 pod_ready.go:86] duration metric: took 398.889482ms for pod "kube-proxy-829lp" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:24.500562 1086607 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:24.900567 1086607 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-330914" is "Ready"
	I1026 15:12:24.900600 1086607 pod_ready.go:86] duration metric: took 400.008062ms for pod "kube-scheduler-old-k8s-version-330914" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:24.900617 1086607 pod_ready.go:40] duration metric: took 35.916930354s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:12:24.950321 1086607 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1026 15:12:24.955073 1086607 out.go:203] 
	W1026 15:12:24.956447 1086607 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1026 15:12:24.957576 1086607 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1026 15:12:24.958913 1086607 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-330914" cluster and "default" namespace by default
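Editor's note: the old-k8s-version start only completed after roughly 36 seconds of pod_ready polling, with "is not Ready" warnings spaced about two seconds apart. A minimal sketch of that wait loop follows; the kubectl-based readiness check is an illustrative stand-in for minikube's client-go code, and the pod name, interval, and six-minute timeout are taken from this log's wait-timeout default.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // podReady asks kubectl for the pod's Ready condition.
    func podReady(pod, ns string) bool {
    	out, err := exec.Command("kubectl", "get", "pod", pod, "-n", ns,
    		"-o", "jsonpath={.status.conditions[?(@.type==\"Ready\")].status}").Output()
    	return err == nil && strings.TrimSpace(string(out)) == "True"
    }

    func main() {
    	const pod, ns = "coredns-5dd5756b68-hzjqn", "kube-system"
    	start := time.Now()
    	for deadline := start.Add(6 * time.Minute); ; time.Sleep(2 * time.Second) {
    		if podReady(pod, ns) {
    			fmt.Printf("pod %q is Ready after %s\n", pod, time.Since(start))
    			return
    		}
    		if time.Now().After(deadline) {
    			fmt.Printf("timed out waiting for %q\n", pod)
    			return
    		}
    		fmt.Printf("pod %q is not Ready, retrying\n", pod)
    	}
    }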
	I1026 15:12:22.927779 1094884 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 15:12:22.928008 1094884 start.go:159] libmachine.API.Create for "embed-certs-535130" (driver="docker")
	I1026 15:12:22.928043 1094884 client.go:168] LocalClient.Create starting
	I1026 15:12:22.928138 1094884 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem
	I1026 15:12:22.928224 1094884 main.go:141] libmachine: Decoding PEM data...
	I1026 15:12:22.928244 1094884 main.go:141] libmachine: Parsing certificate...
	I1026 15:12:22.928320 1094884 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem
	I1026 15:12:22.928345 1094884 main.go:141] libmachine: Decoding PEM data...
	I1026 15:12:22.928354 1094884 main.go:141] libmachine: Parsing certificate...
	I1026 15:12:22.928694 1094884 cli_runner.go:164] Run: docker network inspect embed-certs-535130 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 15:12:22.947434 1094884 cli_runner.go:211] docker network inspect embed-certs-535130 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 15:12:22.947544 1094884 network_create.go:284] running [docker network inspect embed-certs-535130] to gather additional debugging logs...
	I1026 15:12:22.947572 1094884 cli_runner.go:164] Run: docker network inspect embed-certs-535130
	W1026 15:12:22.965884 1094884 cli_runner.go:211] docker network inspect embed-certs-535130 returned with exit code 1
	I1026 15:12:22.965918 1094884 network_create.go:287] error running [docker network inspect embed-certs-535130]: docker network inspect embed-certs-535130: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-535130 not found
	I1026 15:12:22.965936 1094884 network_create.go:289] output of [docker network inspect embed-certs-535130]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-535130 not found
	
	** /stderr **
	I1026 15:12:22.966046 1094884 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:12:22.985557 1094884 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fa58be42f477 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:e4:ad:45:54:67} reservation:<nil>}
	I1026 15:12:22.986359 1094884 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-788b1aa150f9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:3d:9b:f7:9b:2d} reservation:<nil>}
	I1026 15:12:22.987196 1094884 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3ea0f8afe5af IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:d6:81:f4:17:77:eb} reservation:<nil>}
	I1026 15:12:22.988126 1094884 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec6510}
	I1026 15:12:22.988153 1094884 network_create.go:124] attempt to create docker network embed-certs-535130 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1026 15:12:22.988258 1094884 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-535130 embed-certs-535130
	I1026 15:12:23.053788 1094884 network_create.go:108] docker network embed-certs-535130 192.168.76.0/24 created
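Editor's note: the network_create lines above walk 192.168.x.0/24 candidates (49, 58, 67, then 76) and take the first one no existing docker network occupies. A rough sketch of that scan follows; the step size of 9 and the "taken" check are assumptions inferred only from this log, not minikube's network.go.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // taken collects the subnets of all existing docker networks.
    func taken() map[string]bool {
    	set := map[string]bool{}
    	names, _ := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
    	for _, n := range strings.Fields(string(names)) {
    		sub, _ := exec.Command("docker", "network", "inspect", n,
    			"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
    		set[strings.TrimSpace(string(sub))] = true
    	}
    	return set
    }

    func main() {
    	used := taken()
    	for third := 49; third <= 247; third += 9 { // 49, 58, 67, 76, ...
    		subnet := fmt.Sprintf("192.168.%d.0/24", third)
    		if used[subnet] {
    			fmt.Println("skipping subnet", subnet, "that is taken")
    			continue
    		}
    		fmt.Println("using free private subnet", subnet)
    		// minikube then runs, per the cli_runner line above:
    		//   docker network create --driver=bridge --subnet=... --gateway=...
    		return
    	}
    }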
	I1026 15:12:23.053820 1094884 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-535130" container
	I1026 15:12:23.053922 1094884 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 15:12:23.073511 1094884 cli_runner.go:164] Run: docker volume create embed-certs-535130 --label name.minikube.sigs.k8s.io=embed-certs-535130 --label created_by.minikube.sigs.k8s.io=true
	I1026 15:12:23.092193 1094884 oci.go:103] Successfully created a docker volume embed-certs-535130
	I1026 15:12:23.092294 1094884 cli_runner.go:164] Run: docker run --rm --name embed-certs-535130-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-535130 --entrypoint /usr/bin/test -v embed-certs-535130:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 15:12:23.512406 1094884 oci.go:107] Successfully prepared a docker volume embed-certs-535130
	I1026 15:12:23.512440 1094884 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:12:23.512464 1094884 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 15:12:23.512541 1094884 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-535130:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1026 15:12:24.071766 1087870 pod_ready.go:104] pod "coredns-66bc5c9577-knr22" is not "Ready", error: <nil>
	W1026 15:12:26.570742 1087870 pod_ready.go:104] pod "coredns-66bc5c9577-knr22" is not "Ready", error: <nil>
	I1026 15:12:28.044544 1094884 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-535130:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.531951929s)
	I1026 15:12:28.044587 1094884 kic.go:203] duration metric: took 4.532116219s to extract preloaded images to volume ...
	W1026 15:12:28.044702 1094884 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 15:12:28.044786 1094884 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 15:12:28.044853 1094884 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 15:12:28.105477 1094884 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-535130 --name embed-certs-535130 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-535130 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-535130 --network embed-certs-535130 --ip 192.168.76.2 --volume embed-certs-535130:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 15:12:28.395695 1094884 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Running}}
	I1026 15:12:28.416487 1094884 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:12:28.437229 1094884 cli_runner.go:164] Run: docker exec embed-certs-535130 stat /var/lib/dpkg/alternatives/iptables
	I1026 15:12:28.483324 1094884 oci.go:144] the created container "embed-certs-535130" has a running status.
	I1026 15:12:28.483369 1094884 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa...
	I1026 15:12:29.157005 1094884 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 15:12:29.183422 1094884 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:12:29.201144 1094884 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 15:12:29.201180 1094884 kic_runner.go:114] Args: [docker exec --privileged embed-certs-535130 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 15:12:29.249224 1094884 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:12:29.269108 1094884 machine.go:93] provisionDockerMachine start ...
	I1026 15:12:29.269252 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:29.287870 1094884 main.go:141] libmachine: Using SSH client type: native
	I1026 15:12:29.288147 1094884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1026 15:12:29.288181 1094884 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:12:29.432484 1094884 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-535130
	
	I1026 15:12:29.432520 1094884 ubuntu.go:182] provisioning hostname "embed-certs-535130"
	I1026 15:12:29.432600 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:29.451595 1094884 main.go:141] libmachine: Using SSH client type: native
	I1026 15:12:29.451814 1094884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1026 15:12:29.451827 1094884 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-535130 && echo "embed-certs-535130" | sudo tee /etc/hostname
	I1026 15:12:29.605852 1094884 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-535130
	
	I1026 15:12:29.605944 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:29.625782 1094884 main.go:141] libmachine: Using SSH client type: native
	I1026 15:12:29.626088 1094884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1026 15:12:29.626119 1094884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-535130' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-535130/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-535130' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:12:29.770338 1094884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
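
The shell snippet above is how the provisioner makes the /etc/hosts mapping idempotent: touch the file only when no line already ends in the hostname, preferring to rewrite an existing 127.0.1.1 entry over appending a new one. A minimal local Go sketch of the same check-then-rewrite logic (function names here are illustrative; the real code runs the shell version over SSH as logged):

// ensureHostname mirrors the idempotent /etc/hosts update above (sketch,
// assumed names): skip if mapped, rewrite 127.0.1.1, else append.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func ensureHostname(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	has := regexp.MustCompile(`\s` + regexp.QuoteMeta(hostname) + `$`)
	loopback := regexp.MustCompile(`^127\.0\.1\.1\s`)
	for _, l := range lines {
		if has.MatchString(l) {
			return nil // hostname already mapped, nothing to do
		}
	}
	for i, l := range lines {
		if loopback.MatchString(l) {
			lines[i] = "127.0.1.1 " + hostname // replace existing entry
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	lines = append(lines, "127.0.1.1 "+hostname) // no 127.0.1.1 line: append
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "embed-certs-535130"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
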
	I1026 15:12:29.770375 1094884 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-841519/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-841519/.minikube}
	I1026 15:12:29.770428 1094884 ubuntu.go:190] setting up certificates
	I1026 15:12:29.770450 1094884 provision.go:84] configureAuth start
	I1026 15:12:29.770518 1094884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-535130
	I1026 15:12:29.789696 1094884 provision.go:143] copyHostCerts
	I1026 15:12:29.789762 1094884 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem, removing ...
	I1026 15:12:29.789773 1094884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem
	I1026 15:12:29.789856 1094884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem (1082 bytes)
	I1026 15:12:29.789987 1094884 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem, removing ...
	I1026 15:12:29.789999 1094884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem
	I1026 15:12:29.790049 1094884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem (1123 bytes)
	I1026 15:12:29.790145 1094884 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem, removing ...
	I1026 15:12:29.790156 1094884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem
	I1026 15:12:29.790206 1094884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem (1675 bytes)
	I1026 15:12:29.790284 1094884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem org=jenkins.embed-certs-535130 san=[127.0.0.1 192.168.76.2 embed-certs-535130 localhost minikube]
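
provision.go:117 above generates a server certificate whose SANs cover the loopback address, the node IP, the node hostname, localhost and minikube. A rough crypto/x509 sketch of issuing a certificate with that SAN set; it self-signs for brevity, whereas the provisioner signs with the minikube CA (the ca.pem/ca-key.pem pair copied above):

// Sketch only: same SAN wiring as the log, but self-signed.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-535130"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN set from the san=[...] list logged above:
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:    []string{"embed-certs-535130", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
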
	I1026 15:12:30.082527 1094884 provision.go:177] copyRemoteCerts
	I1026 15:12:30.082582 1094884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:12:30.082620 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:30.101581 1094884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:12:30.204007 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:12:30.225022 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:12:30.242962 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 15:12:30.260627 1094884 provision.go:87] duration metric: took 490.157243ms to configureAuth
	I1026 15:12:30.260655 1094884 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:12:30.260857 1094884 config.go:182] Loaded profile config "embed-certs-535130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:12:30.260976 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:30.279328 1094884 main.go:141] libmachine: Using SSH client type: native
	I1026 15:12:30.279545 1094884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33842 <nil> <nil>}
	I1026 15:12:30.279561 1094884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:12:30.540929 1094884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:12:30.540953 1094884 machine.go:96] duration metric: took 1.27182251s to provisionDockerMachine
	I1026 15:12:30.540967 1094884 client.go:171] duration metric: took 7.612915574s to LocalClient.Create
	I1026 15:12:30.540991 1094884 start.go:167] duration metric: took 7.612983362s to libmachine.API.Create "embed-certs-535130"
	I1026 15:12:30.541001 1094884 start.go:293] postStartSetup for "embed-certs-535130" (driver="docker")
	I1026 15:12:30.541015 1094884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:12:30.541083 1094884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:12:30.541145 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:30.560194 1094884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:12:30.666065 1094884 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:12:30.669831 1094884 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:12:30.669865 1094884 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:12:30.669877 1094884 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/addons for local assets ...
	I1026 15:12:30.669933 1094884 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/files for local assets ...
	I1026 15:12:30.670044 1094884 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem -> 8450952.pem in /etc/ssl/certs
	I1026 15:12:30.670157 1094884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:12:30.678218 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:12:30.700030 1094884 start.go:296] duration metric: took 159.014656ms for postStartSetup
	I1026 15:12:30.700424 1094884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-535130
	I1026 15:12:30.720118 1094884 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/config.json ...
	I1026 15:12:30.720413 1094884 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:12:30.720465 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:30.739104 1094884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:12:30.837679 1094884 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:12:30.842561 1094884 start.go:128] duration metric: took 7.916843227s to createHost
	I1026 15:12:30.842593 1094884 start.go:83] releasing machines lock for "embed-certs-535130", held for 7.916973049s
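
The two df runs a few lines up (percent used, then gigabytes free on /var) guard against provisioning onto a full disk. A Linux-only sketch of the same probe via statfs instead of shelling out; names are illustrative:

// Free-space probe for /var, the non-shell equivalent of `df -BG /var`.
package main

import (
	"fmt"
	"syscall"
)

func main() {
	var s syscall.Statfs_t
	if err := syscall.Statfs("/var", &s); err != nil {
		panic(err)
	}
	total := s.Blocks * uint64(s.Bsize)
	free := s.Bavail * uint64(s.Bsize)
	fmt.Printf("/var: %dG free of %dG (%.0f%% used)\n",
		free>>30, total>>30, 100*float64(total-free)/float64(total))
}
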
	I1026 15:12:30.842682 1094884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-535130
	I1026 15:12:30.861500 1094884 ssh_runner.go:195] Run: cat /version.json
	I1026 15:12:30.861556 1094884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:12:30.861562 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:30.861619 1094884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:12:30.880085 1094884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:12:30.880552 1094884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33842 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:12:31.043055 1094884 ssh_runner.go:195] Run: systemctl --version
	I1026 15:12:31.050442 1094884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:12:31.091997 1094884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:12:31.097046 1094884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:12:31.097112 1094884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:12:31.124040 1094884 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 15:12:31.124067 1094884 start.go:495] detecting cgroup driver to use...
	I1026 15:12:31.124106 1094884 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 15:12:31.124152 1094884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:12:31.143171 1094884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:12:31.157567 1094884 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:12:31.157636 1094884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:12:31.175501 1094884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:12:31.195107 1094884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:12:31.280916 1094884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:12:31.370324 1094884 docker.go:234] disabling docker service ...
	I1026 15:12:31.370389 1094884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:12:31.391038 1094884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:12:31.405225 1094884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:12:31.494860 1094884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:12:31.581190 1094884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:12:31.595100 1094884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:12:31.610576 1094884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:12:31.610643 1094884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:31.621702 1094884 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 15:12:31.621772 1094884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:31.631933 1094884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:31.641706 1094884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:31.652631 1094884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:12:31.662065 1094884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:31.672261 1094884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:31.687254 1094884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:31.697622 1094884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:12:31.705869 1094884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:12:31.714245 1094884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:12:31.797931 1094884 ssh_runner.go:195] Run: sudo systemctl restart crio
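
The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted. As a sketch, the pause-image swap could equally be done with a multiline regexp in Go; the path and replacement string below are taken verbatim from the log:

// Rewrite the pause_image line of the CRI-O drop-in, as the sed above does.
package main

import (
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	if err := os.WriteFile(path, out, 0644); err != nil {
		panic(err)
	}
}
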
	I1026 15:12:31.907320 1094884 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:12:31.907394 1094884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:12:31.911700 1094884 start.go:563] Will wait 60s for crictl version
	I1026 15:12:31.911755 1094884 ssh_runner.go:195] Run: which crictl
	I1026 15:12:31.916061 1094884 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:12:31.941571 1094884 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
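
Both 60s waits above (first for the crio.sock path to exist, then for crictl to answer) are poll-until-deadline loops. A hedged sketch of the socket wait, with invented helper names:

// Poll for the CRI socket until it appears or the deadline passes.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
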
	I1026 15:12:31.941644 1094884 ssh_runner.go:195] Run: crio --version
	I1026 15:12:31.971039 1094884 ssh_runner.go:195] Run: crio --version
	I1026 15:12:32.004653 1094884 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1026 15:12:28.572313 1087870 pod_ready.go:104] pod "coredns-66bc5c9577-knr22" is not "Ready", error: <nil>
	I1026 15:12:31.070880 1087870 pod_ready.go:94] pod "coredns-66bc5c9577-knr22" is "Ready"
	I1026 15:12:31.070915 1087870 pod_ready.go:86] duration metric: took 37.006499908s for pod "coredns-66bc5c9577-knr22" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.073905 1087870 pod_ready.go:83] waiting for pod "etcd-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.078107 1087870 pod_ready.go:94] pod "etcd-no-preload-475081" is "Ready"
	I1026 15:12:31.078138 1087870 pod_ready.go:86] duration metric: took 4.207111ms for pod "etcd-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.080180 1087870 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.084392 1087870 pod_ready.go:94] pod "kube-apiserver-no-preload-475081" is "Ready"
	I1026 15:12:31.084426 1087870 pod_ready.go:86] duration metric: took 4.226805ms for pod "kube-apiserver-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.088708 1087870 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.269156 1087870 pod_ready.go:94] pod "kube-controller-manager-no-preload-475081" is "Ready"
	I1026 15:12:31.269206 1087870 pod_ready.go:86] duration metric: took 180.476065ms for pod "kube-controller-manager-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.468623 1087870 pod_ready.go:83] waiting for pod "kube-proxy-smtlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:31.869293 1087870 pod_ready.go:94] pod "kube-proxy-smtlg" is "Ready"
	I1026 15:12:31.869330 1087870 pod_ready.go:86] duration metric: took 400.674816ms for pod "kube-proxy-smtlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:32.068930 1087870 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:32.468962 1087870 pod_ready.go:94] pod "kube-scheduler-no-preload-475081" is "Ready"
	I1026 15:12:32.468992 1087870 pod_ready.go:86] duration metric: took 400.035699ms for pod "kube-scheduler-no-preload-475081" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:12:32.469006 1087870 pod_ready.go:40] duration metric: took 38.40815001s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:12:32.526497 1087870 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:12:32.529763 1087870 out.go:179] * Done! kubectl is now configured to use "no-preload-475081" cluster and "default" namespace by default
	I1026 15:12:32.005880 1094884 cli_runner.go:164] Run: docker network inspect embed-certs-535130 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:12:32.024572 1094884 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 15:12:32.029206 1094884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:12:32.040870 1094884 kubeadm.go:883] updating cluster {Name:embed-certs-535130 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1026 15:12:32.041002 1094884 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:12:32.041061 1094884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:12:32.075869 1094884 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:12:32.075897 1094884 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:12:32.075949 1094884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:12:32.102439 1094884 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:12:32.102468 1094884 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:12:32.102478 1094884 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1026 15:12:32.102571 1094884 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-535130 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:12:32.102633 1094884 ssh_runner.go:195] Run: crio config
	I1026 15:12:32.149754 1094884 cni.go:84] Creating CNI manager for ""
	I1026 15:12:32.149778 1094884 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:12:32.149796 1094884 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:12:32.149823 1094884 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-535130 NodeName:embed-certs-535130 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:12:32.149988 1094884 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-535130"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
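
The kubeadm config above is rendered from the option struct logged at kubeadm.go:190 and written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A toy text/template sketch of rendering the InitConfiguration fragment; the parameter struct is invented for illustration and is not minikube's actual template:

// Render an InitConfiguration fragment from parameters (toy sketch).
package main

import (
	"os"
	"text/template"
)

const frag = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

type params struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	t.Execute(os.Stdout, params{
		AdvertiseAddress: "192.168.76.2",
		APIServerPort:    8443,
		CRISocket:        "/var/run/crio/crio.sock",
		NodeName:         "embed-certs-535130",
	})
}
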
	
	I1026 15:12:32.150086 1094884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:12:32.158464 1094884 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:12:32.158526 1094884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:12:32.166272 1094884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1026 15:12:32.179046 1094884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:12:32.195352 1094884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1026 15:12:32.209747 1094884 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:12:32.213887 1094884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:12:32.224809 1094884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:12:32.308443 1094884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:12:32.338158 1094884 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130 for IP: 192.168.76.2
	I1026 15:12:32.338213 1094884 certs.go:195] generating shared ca certs ...
	I1026 15:12:32.338238 1094884 certs.go:227] acquiring lock for ca certs: {Name:mkc310765b5f037cf348f6c57ba521193a825757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:32.338410 1094884 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key
	I1026 15:12:32.338458 1094884 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key
	I1026 15:12:32.338469 1094884 certs.go:257] generating profile certs ...
	I1026 15:12:32.338529 1094884 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/client.key
	I1026 15:12:32.338550 1094884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/client.crt with IP's: []
	I1026 15:12:32.566180 1094884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/client.crt ...
	I1026 15:12:32.566211 1094884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/client.crt: {Name:mkd6d336e91342a08904be85dabf843a66ea95b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:32.566384 1094884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/client.key ...
	I1026 15:12:32.566397 1094884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/client.key: {Name:mk4416c5b817100d65b64e109f73505f873e43f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:32.566477 1094884 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key.abe399f3
	I1026 15:12:32.566499 1094884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.crt.abe399f3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1026 15:12:32.754452 1094884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.crt.abe399f3 ...
	I1026 15:12:32.754486 1094884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.crt.abe399f3: {Name:mkabb7862e92bef693c45258c1617506096cdb12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:32.754719 1094884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key.abe399f3 ...
	I1026 15:12:32.754740 1094884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key.abe399f3: {Name:mk215afc3790eeabca9034d99e286de6a2066abc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:32.754854 1094884 certs.go:382] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.crt.abe399f3 -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.crt
	I1026 15:12:32.755001 1094884 certs.go:386] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key.abe399f3 -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key
	I1026 15:12:32.755099 1094884 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.key
	I1026 15:12:32.755124 1094884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.crt with IP's: []
	I1026 15:12:33.207302 1094884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.crt ...
	I1026 15:12:33.207334 1094884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.crt: {Name:mk113bd43484e2aa10efeeed24889f71d62785e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:33.207519 1094884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.key ...
	I1026 15:12:33.207536 1094884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.key: {Name:mkfed9208b9b01aa68dc5edcf9bb22e51125ffb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:33.207753 1094884 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem (1338 bytes)
	W1026 15:12:33.207808 1094884 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095_empty.pem, impossibly tiny 0 bytes
	I1026 15:12:33.207819 1094884 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:12:33.207838 1094884 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:12:33.207860 1094884 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:12:33.207882 1094884 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem (1675 bytes)
	I1026 15:12:33.207921 1094884 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:12:33.208583 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:12:33.227595 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:12:33.245914 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:12:33.265321 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:12:33.285592 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1026 15:12:33.304704 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 15:12:33.323649 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:12:33.344374 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:12:33.364210 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem --> /usr/share/ca-certificates/845095.pem (1338 bytes)
	I1026 15:12:33.385157 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /usr/share/ca-certificates/8450952.pem (1708 bytes)
	I1026 15:12:33.404256 1094884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:12:33.423316 1094884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:12:33.436366 1094884 ssh_runner.go:195] Run: openssl version
	I1026 15:12:33.442667 1094884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8450952.pem && ln -fs /usr/share/ca-certificates/8450952.pem /etc/ssl/certs/8450952.pem"
	I1026 15:12:33.451487 1094884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8450952.pem
	I1026 15:12:33.455627 1094884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:26 /usr/share/ca-certificates/8450952.pem
	I1026 15:12:33.455683 1094884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8450952.pem
	I1026 15:12:33.492656 1094884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8450952.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:12:33.502994 1094884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:12:33.512795 1094884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:12:33.517090 1094884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:14 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:12:33.517196 1094884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:12:33.552991 1094884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:12:33.563221 1094884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845095.pem && ln -fs /usr/share/ca-certificates/845095.pem /etc/ssl/certs/845095.pem"
	I1026 15:12:33.572631 1094884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845095.pem
	I1026 15:12:33.577148 1094884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:26 /usr/share/ca-certificates/845095.pem
	I1026 15:12:33.577254 1094884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845095.pem
	I1026 15:12:33.613124 1094884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/845095.pem /etc/ssl/certs/51391683.0"
	I1026 15:12:33.622923 1094884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:12:33.627486 1094884 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 15:12:33.627550 1094884 kubeadm.go:400] StartCluster: {Name:embed-certs-535130 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:12:33.627624 1094884 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:12:33.627672 1094884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:12:33.657036 1094884 cri.go:89] found id: ""
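
StartCluster begins by listing any leftover kube-system containers; the crictl run above returned no IDs (found id: ""). A thin sketch of that listing as an exec wrapper around the exact command logged; the wrapper names are illustrative:

// List kube-system container IDs by label via crictl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listPodContainers(namespace string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace="+namespace).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	ids, err := listPodContainers("kube-system")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d container(s): %v\n", len(ids), ids)
}
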
	I1026 15:12:33.657097 1094884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:12:33.665274 1094884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:12:33.673313 1094884 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 15:12:33.673363 1094884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:12:33.681347 1094884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:12:33.681368 1094884 kubeadm.go:157] found existing configuration files:
	
	I1026 15:12:33.681408 1094884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:12:33.689860 1094884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:12:33.689914 1094884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:12:33.698191 1094884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:12:33.706608 1094884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:12:33.706671 1094884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:12:33.715007 1094884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:12:33.724484 1094884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:12:33.724552 1094884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:12:33.732614 1094884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:12:33.740833 1094884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:12:33.740886 1094884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
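
The grep/rm sequence above is a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise deleted so kubeadm regenerates it. The same sweep as a compact Go sketch (a local approximation of what the logged shell commands do remotely):

// Remove kubeconfigs that are missing or point at the wrong endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(f) // missing or stale: kubeadm will regenerate it
			fmt.Println("removed", f)
		}
	}
}
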
	I1026 15:12:33.748934 1094884 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 15:12:33.792631 1094884 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 15:12:33.792748 1094884 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 15:12:33.816843 1094884 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 15:12:33.816927 1094884 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 15:12:33.816973 1094884 kubeadm.go:318] OS: Linux
	I1026 15:12:33.817035 1094884 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 15:12:33.817132 1094884 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 15:12:33.817221 1094884 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 15:12:33.817300 1094884 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 15:12:33.817392 1094884 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 15:12:33.817469 1094884 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 15:12:33.817529 1094884 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 15:12:33.817610 1094884 kubeadm.go:318] CGROUPS_IO: enabled
	I1026 15:12:33.878062 1094884 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 15:12:33.878236 1094884 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 15:12:33.878364 1094884 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 15:12:33.887538 1094884 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 15:12:33.889316 1094884 out.go:252]   - Generating certificates and keys ...
	I1026 15:12:33.889393 1094884 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:12:33.889456 1094884 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:12:34.292495 1094884 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:12:34.449436 1094884 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:12:34.657020 1094884 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:12:35.300215 1094884 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:12:35.661499 1094884 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:12:35.661692 1094884 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-535130 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 15:12:35.807387 1094884 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:12:35.807513 1094884 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-535130 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 15:12:35.865776 1094884 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:12:36.035254 1094884 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:12:36.141587 1094884 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:12:36.141681 1094884 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:12:36.336316 1094884 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:12:36.502661 1094884 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 15:12:37.100733 1094884 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:12:37.150513 1094884 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:12:37.345845 1094884 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:12:37.346412 1094884 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:12:37.350599 1094884 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 15:12:37.353330 1094884 out.go:252]   - Booting up control plane ...
	I1026 15:12:37.353462 1094884 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:12:37.353580 1094884 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:12:37.353685 1094884 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:12:37.367641 1094884 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:12:37.367803 1094884 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:12:37.375592 1094884 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:12:37.375779 1094884 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:12:37.375850 1094884 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:12:37.490942 1094884 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:12:37.491126 1094884 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 15:12:38.492667 1094884 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001913509s
	I1026 15:12:38.496939 1094884 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 15:12:38.497073 1094884 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1026 15:12:38.497244 1094884 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 15:12:38.497375 1094884 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 15:12:39.591056 1094884 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.094112128s
	I1026 15:12:40.707598 1094884 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.210802183s
	I1026 15:12:42.499034 1094884 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.002082803s
	I1026 15:12:42.511656 1094884 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 15:12:42.524553 1094884 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 15:12:42.534988 1094884 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 15:12:42.535273 1094884 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-535130 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 15:12:42.547286 1094884 kubeadm.go:318] [bootstrap-token] Using token: h8l7rk.ibjtcabj9d111dnd
	I1026 15:12:42.548901 1094884 out.go:252]   - Configuring RBAC rules ...
	I1026 15:12:42.549035 1094884 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 15:12:42.553784 1094884 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 15:12:42.560384 1094884 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 15:12:42.563911 1094884 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 15:12:42.568442 1094884 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 15:12:42.571597 1094884 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 15:12:42.906085 1094884 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 15:12:43.322824 1094884 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 15:12:43.905989 1094884 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 15:12:43.907060 1094884 kubeadm.go:318] 
	I1026 15:12:43.907218 1094884 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 15:12:43.907242 1094884 kubeadm.go:318] 
	I1026 15:12:43.907383 1094884 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 15:12:43.907399 1094884 kubeadm.go:318] 
	I1026 15:12:43.907437 1094884 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 15:12:43.907506 1094884 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 15:12:43.907565 1094884 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 15:12:43.907574 1094884 kubeadm.go:318] 
	I1026 15:12:43.907632 1094884 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 15:12:43.907640 1094884 kubeadm.go:318] 
	I1026 15:12:43.907692 1094884 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 15:12:43.907700 1094884 kubeadm.go:318] 
	I1026 15:12:43.907756 1094884 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 15:12:43.907842 1094884 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 15:12:43.907920 1094884 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 15:12:43.907928 1094884 kubeadm.go:318] 
	I1026 15:12:43.908076 1094884 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 15:12:43.908190 1094884 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 15:12:43.908202 1094884 kubeadm.go:318] 
	I1026 15:12:43.908311 1094884 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token h8l7rk.ibjtcabj9d111dnd \
	I1026 15:12:43.908481 1094884 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:17405a11f9ced5253329d88582717a258ab19676719f7fb1d52a2fb8fc3ffa0b \
	I1026 15:12:43.908515 1094884 kubeadm.go:318] 	--control-plane 
	I1026 15:12:43.908524 1094884 kubeadm.go:318] 
	I1026 15:12:43.908653 1094884 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 15:12:43.908666 1094884 kubeadm.go:318] 
	I1026 15:12:43.908777 1094884 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token h8l7rk.ibjtcabj9d111dnd \
	I1026 15:12:43.908921 1094884 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:17405a11f9ced5253329d88582717a258ab19676719f7fb1d52a2fb8fc3ffa0b 
	I1026 15:12:43.912660 1094884 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1026 15:12:43.912827 1094884 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 15:12:43.912868 1094884 cni.go:84] Creating CNI manager for ""
	I1026 15:12:43.912883 1094884 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:12:43.916334 1094884 out.go:179] * Configuring CNI (Container Networking Interface) ...
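A minimal sketch of the post-init steps kubeadm prints above, with a readiness check added as an assumption (it is not part of the test run); the node stays NotReady until the recommended CNI (kindnet) is configured:

	mkdir -p "$HOME/.kube"
	sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
	sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
	kubectl get nodes   # reports Ready only once the CNI plugin is applied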
	
	
	==> CRI-O <==
	Oct 26 15:12:03 no-preload-475081 crio[563]: time="2025-10-26T15:12:03.862123587Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:12:04 no-preload-475081 crio[563]: time="2025-10-26T15:12:04.06566861Z" level=info msg="Removing container: d1aac639d2701bd3d664d4238f3a67cc5f2da5687fca8c7c827325be59ee2400" id=725d5dd0-6c53-4cb3-9c8f-aae478ad05a4 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:12:04 no-preload-475081 crio[563]: time="2025-10-26T15:12:04.077654391Z" level=info msg="Removed container d1aac639d2701bd3d664d4238f3a67cc5f2da5687fca8c7c827325be59ee2400: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k/dashboard-metrics-scraper" id=725d5dd0-6c53-4cb3-9c8f-aae478ad05a4 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:12:20 no-preload-475081 crio[563]: time="2025-10-26T15:12:20.000686339Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b43c1373-3e7c-41c8-9f73-db211fa5204a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:12:20 no-preload-475081 crio[563]: time="2025-10-26T15:12:20.001825245Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f0bfee04-9474-409d-9b39-2d7b06f7208b name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:12:20 no-preload-475081 crio[563]: time="2025-10-26T15:12:20.002953267Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k/dashboard-metrics-scraper" id=8c8054de-9d87-4d69-abb7-a3977890dc18 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:12:20 no-preload-475081 crio[563]: time="2025-10-26T15:12:20.003105919Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:20 no-preload-475081 crio[563]: time="2025-10-26T15:12:20.008948387Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:20 no-preload-475081 crio[563]: time="2025-10-26T15:12:20.009520022Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:20 no-preload-475081 crio[563]: time="2025-10-26T15:12:20.050632772Z" level=info msg="Created container 0ec84e9db53528cf0f7266fd92440deff135d0675eb15c3c396aa1c80902cc8a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k/dashboard-metrics-scraper" id=8c8054de-9d87-4d69-abb7-a3977890dc18 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:12:20 no-preload-475081 crio[563]: time="2025-10-26T15:12:20.051366905Z" level=info msg="Starting container: 0ec84e9db53528cf0f7266fd92440deff135d0675eb15c3c396aa1c80902cc8a" id=28f34e62-75dd-40a6-a6bf-505de2451577 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:12:20 no-preload-475081 crio[563]: time="2025-10-26T15:12:20.053362728Z" level=info msg="Started container" PID=1754 containerID=0ec84e9db53528cf0f7266fd92440deff135d0675eb15c3c396aa1c80902cc8a description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k/dashboard-metrics-scraper id=28f34e62-75dd-40a6-a6bf-505de2451577 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ee23a609fc77d94a3f21344816eb15996c58960315304f4db639720d00c87218
	Oct 26 15:12:20 no-preload-475081 crio[563]: time="2025-10-26T15:12:20.111103774Z" level=info msg="Removing container: 030556e7613924aa8dd049dacca3918881d73ed8494ae3aba6bd0483c4d9fa6d" id=1e64131b-cc85-492a-9d42-844594cb3e5c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:12:20 no-preload-475081 crio[563]: time="2025-10-26T15:12:20.124351362Z" level=info msg="Removed container 030556e7613924aa8dd049dacca3918881d73ed8494ae3aba6bd0483c4d9fa6d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k/dashboard-metrics-scraper" id=1e64131b-cc85-492a-9d42-844594cb3e5c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:12:24 no-preload-475081 crio[563]: time="2025-10-26T15:12:24.124675132Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8879b7e7-084a-4d82-9b52-df1d8be50751 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:12:24 no-preload-475081 crio[563]: time="2025-10-26T15:12:24.125626161Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4bdf3a24-388c-4c0d-a4d6-a83785d201e9 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:12:24 no-preload-475081 crio[563]: time="2025-10-26T15:12:24.126672824Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=aa76af78-b791-4b41-827d-c9ecd1288fc2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:12:24 no-preload-475081 crio[563]: time="2025-10-26T15:12:24.126812735Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:24 no-preload-475081 crio[563]: time="2025-10-26T15:12:24.131646461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:24 no-preload-475081 crio[563]: time="2025-10-26T15:12:24.131826967Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/762bb6d6f287de0a8a74db036c353130536ffea687c9e2a4aae75ce3d9a941d7/merged/etc/passwd: no such file or directory"
	Oct 26 15:12:24 no-preload-475081 crio[563]: time="2025-10-26T15:12:24.131852981Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/762bb6d6f287de0a8a74db036c353130536ffea687c9e2a4aae75ce3d9a941d7/merged/etc/group: no such file or directory"
	Oct 26 15:12:24 no-preload-475081 crio[563]: time="2025-10-26T15:12:24.132095533Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:24 no-preload-475081 crio[563]: time="2025-10-26T15:12:24.171524504Z" level=info msg="Created container 25eb572506b0f83109b10c039ec7e2b2de1dbc85f0c88659521085982e86afb2: kube-system/storage-provisioner/storage-provisioner" id=aa76af78-b791-4b41-827d-c9ecd1288fc2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:12:24 no-preload-475081 crio[563]: time="2025-10-26T15:12:24.17222666Z" level=info msg="Starting container: 25eb572506b0f83109b10c039ec7e2b2de1dbc85f0c88659521085982e86afb2" id=423ad62b-e367-4881-aeca-2a6207e2af05 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:12:24 no-preload-475081 crio[563]: time="2025-10-26T15:12:24.174044058Z" level=info msg="Started container" PID=1768 containerID=25eb572506b0f83109b10c039ec7e2b2de1dbc85f0c88659521085982e86afb2 description=kube-system/storage-provisioner/storage-provisioner id=423ad62b-e367-4881-aeca-2a6207e2af05 name=/runtime.v1.RuntimeService/StartContainer sandboxID=20dcd5eeafe71baebbe702cbaa4dca8e5066d28e6bf9c8e35adfbd791a305fcf
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	25eb572506b0f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   20dcd5eeafe71       storage-provisioner                          kube-system
	0ec84e9db5352       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago      Exited              dashboard-metrics-scraper   2                   ee23a609fc77d       dashboard-metrics-scraper-6ffb444bf9-4ss9k   kubernetes-dashboard
	e82d8c73ee208       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   d0c7581fdc437       kubernetes-dashboard-855c9754f9-swr7t        kubernetes-dashboard
	9e9e9731eb664       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   13c7f560aeb59       busybox                                      default
	c9a8e6dbea2af       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   b768c5151c84f       coredns-66bc5c9577-knr22                     kube-system
	8db7d27d5a317       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   4d5aad78500e7       kindnet-7cnvx                                kube-system
	fd565f0a0c107       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   20dcd5eeafe71       storage-provisioner                          kube-system
	4128b713bc3da       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   42c59105ffefb       kube-proxy-smtlg                             kube-system
	55addbe4a3d90       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   d5dc0a4427687       kube-controller-manager-no-preload-475081    kube-system
	72798a668fb70       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   76caa198cfa6f       etcd-no-preload-475081                       kube-system
	c9e1c6df0d421       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   81224d2f390fd       kube-apiserver-no-preload-475081             kube-system
	ca6f184d3a6d0       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   0185635e166bb       kube-scheduler-no-preload-475081             kube-system
	
	
	==> coredns [c9a8e6dbea2afb3687eae7cf2bdc70948901868e83cb61c6a3824c7badb8f216] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57161 - 59681 "HINFO IN 4383350840677580000.7042701348998389816. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.084423885s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
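The three dial timeouts above all target 10.96.0.1:443, the in-cluster kubernetes Service ClusterIP. A minimal diagnostic sketch, assuming the host kubeconfig for this profile (the --context form matches the kubectl invocations used elsewhere in this report):

	kubectl --context no-preload-475081 -n default get svc kubernetes
	kubectl --context no-preload-475081 -n default get endpoints kubernetes
	kubectl --context no-preload-475081 get --raw /readyz   # direct API-server reachability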
	
	
	==> describe nodes <==
	Name:               no-preload-475081
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-475081
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=no-preload-475081
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_10_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:10:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-475081
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:12:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:12:33 +0000   Sun, 26 Oct 2025 15:10:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:12:33 +0000   Sun, 26 Oct 2025 15:10:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:12:33 +0000   Sun, 26 Oct 2025 15:10:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:12:33 +0000   Sun, 26 Oct 2025 15:11:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-475081
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                27d383f0-839c-47db-b23d-2fb7490add92
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-knr22                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-no-preload-475081                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-7cnvx                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-no-preload-475081              250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-no-preload-475081     200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-smtlg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-no-preload-475081              100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4ss9k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-swr7t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 53s                  kube-proxy       
	  Normal  Starting                 116s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)  kubelet          Node no-preload-475081 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet          Node no-preload-475081 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x8 over 116s)  kubelet          Node no-preload-475081 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    111s                 kubelet          Node no-preload-475081 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s                 kubelet          Node no-preload-475081 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  111s                 kubelet          Node no-preload-475081 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           107s                 node-controller  Node no-preload-475081 event: Registered Node no-preload-475081 in Controller
	  Normal  NodeReady                93s                  kubelet          Node no-preload-475081 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 58s)    kubelet          Node no-preload-475081 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 58s)    kubelet          Node no-preload-475081 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 58s)    kubelet          Node no-preload-475081 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                  node-controller  Node no-preload-475081 event: Registered Node no-preload-475081 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	
	
	==> etcd [72798a668fb70570d7f8691079339c46937ad357412930fe98e931819114ad86] <==
	{"level":"warn","ts":"2025-10-26T15:11:51.759399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.767491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.775020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.790844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.805940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.813344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.819551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.826684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.833941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.841563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.854298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.861215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.867764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.875201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.882904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.890208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.898563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.905638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.912414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.918924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.926341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.944494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.948523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.962925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:52.016083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43612","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:12:47 up  2:55,  0 user,  load average: 2.34, 2.40, 1.67
	Linux no-preload-475081 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8db7d27d5a3179e5157c946470653bdf1401a6583999cfe0b6e584dbd4aa55da] <==
	I1026 15:11:53.559143       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:11:53.630653       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1026 15:11:53.630898       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:11:53.630921       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:11:53.630958       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:11:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:11:53.856310       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:11:53.856408       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:11:53.856435       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:11:53.956000       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 15:11:54.256071       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:11:54.256110       1 metrics.go:72] Registering metrics
	I1026 15:11:54.256217       1 controller.go:711] "Syncing nftables rules"
	I1026 15:12:03.833780       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 15:12:03.833889       1 main.go:301] handling current node
	I1026 15:12:13.834425       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 15:12:13.834510       1 main.go:301] handling current node
	I1026 15:12:23.833514       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 15:12:23.833580       1 main.go:301] handling current node
	I1026 15:12:33.834207       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 15:12:33.834257       1 main.go:301] handling current node
	I1026 15:12:43.842257       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 15:12:43.842290       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c9e1c6df0d421d98d9ed1fd66b6c86206eb9055c5559f467f5d78f9891d1b67b] <==
	I1026 15:11:52.500548       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:11:52.500573       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:11:52.500599       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 15:11:52.500606       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 15:11:52.500721       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1026 15:11:52.501047       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 15:11:52.501216       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 15:11:52.501340       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:11:52.501703       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1026 15:11:52.506542       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1026 15:11:52.506605       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	E1026 15:11:52.508486       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 15:11:52.508709       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 15:11:52.556604       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:11:52.787898       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 15:11:52.823794       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:11:52.846215       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:11:52.854045       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:11:52.865139       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:11:52.907204       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.79.177"}
	I1026 15:11:52.918040       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.115.201"}
	I1026 15:11:53.404919       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:11:56.247953       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 15:11:56.346689       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:11:56.396973       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [55addbe4a3d90ebe69842fd45024bf12a7a8de8c7e93f05e1323f03b190d25ec] <==
	I1026 15:11:55.825548       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 15:11:55.825595       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 15:11:55.825602       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 15:11:55.825607       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 15:11:55.827807       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 15:11:55.830086       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 15:11:55.831552       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 15:11:55.838862       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1026 15:11:55.843070       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 15:11:55.843120       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 15:11:55.843123       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 15:11:55.843136       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 15:11:55.843136       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 15:11:55.843250       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 15:11:55.843347       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-475081"
	I1026 15:11:55.843429       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 15:11:55.843479       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 15:11:55.843567       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 15:11:55.844592       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 15:11:55.845775       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 15:11:55.848399       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 15:11:55.849598       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 15:11:55.849644       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:11:55.853915       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 15:11:55.872375       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [4128b713bc3dafdb515ba3846752fb30d1e3a80d54c49f4b46aa2506000b8235] <==
	I1026 15:11:53.408993       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:11:53.496630       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:11:53.596779       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:11:53.596812       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1026 15:11:53.596966       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:11:53.618915       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:11:53.618979       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:11:53.624517       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:11:53.624927       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:11:53.624958       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:11:53.627894       1 config.go:200] "Starting service config controller"
	I1026 15:11:53.627920       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:11:53.627926       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:11:53.627936       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:11:53.627956       1 config.go:309] "Starting node config controller"
	I1026 15:11:53.627961       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:11:53.628098       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:11:53.628121       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:11:53.728938       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 15:11:53.728956       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:11:53.728935       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:11:53.729009       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
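The nodePortAddresses warning above refers to kube-proxy's configuration. A minimal sketch for inspecting the live setting, assuming the kubeadm-default ConfigMap name and data key:

	kubectl --context no-preload-475081 -n kube-system get configmap kube-proxy \
	  -o jsonpath='{.data.config\.conf}' | grep -i nodePortAddresses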
	
	
	==> kube-scheduler [ca6f184d3a6d0f2f0031a61280ff5266dd116d977154881018aa85f3aa81d941] <==
	I1026 15:11:51.148030       1 serving.go:386] Generated self-signed cert in-memory
	W1026 15:11:52.422616       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 15:11:52.422777       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 15:11:52.422800       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 15:11:52.422831       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 15:11:52.462746       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 15:11:52.462798       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:11:52.466104       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:11:52.466264       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:11:52.466272       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:11:52.466446       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 15:11:52.567081       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:11:56 no-preload-475081 kubelet[711]: I1026 15:11:56.577646     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/efaa899d-cf84-4a9b-b57a-cf83dc11107f-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-4ss9k\" (UID: \"efaa899d-cf84-4a9b-b57a-cf83dc11107f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k"
	Oct 26 15:11:56 no-preload-475081 kubelet[711]: I1026 15:11:56.577669     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkzsj\" (UniqueName: \"kubernetes.io/projected/efaa899d-cf84-4a9b-b57a-cf83dc11107f-kube-api-access-zkzsj\") pod \"dashboard-metrics-scraper-6ffb444bf9-4ss9k\" (UID: \"efaa899d-cf84-4a9b-b57a-cf83dc11107f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k"
	Oct 26 15:12:00 no-preload-475081 kubelet[711]: I1026 15:12:00.960715     711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 26 15:12:01 no-preload-475081 kubelet[711]: I1026 15:12:01.072812     711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-swr7t" podStartSLOduration=1.785409121 podStartE2EDuration="5.072771669s" podCreationTimestamp="2025-10-26 15:11:56 +0000 UTC" firstStartedPulling="2025-10-26 15:11:56.798935571 +0000 UTC m=+6.893230430" lastFinishedPulling="2025-10-26 15:12:00.086298119 +0000 UTC m=+10.180592978" observedRunningTime="2025-10-26 15:12:01.071968626 +0000 UTC m=+11.166263500" watchObservedRunningTime="2025-10-26 15:12:01.072771669 +0000 UTC m=+11.167066535"
	Oct 26 15:12:03 no-preload-475081 kubelet[711]: I1026 15:12:03.058411     711 scope.go:117] "RemoveContainer" containerID="d1aac639d2701bd3d664d4238f3a67cc5f2da5687fca8c7c827325be59ee2400"
	Oct 26 15:12:04 no-preload-475081 kubelet[711]: I1026 15:12:04.064153     711 scope.go:117] "RemoveContainer" containerID="d1aac639d2701bd3d664d4238f3a67cc5f2da5687fca8c7c827325be59ee2400"
	Oct 26 15:12:04 no-preload-475081 kubelet[711]: I1026 15:12:04.064332     711 scope.go:117] "RemoveContainer" containerID="030556e7613924aa8dd049dacca3918881d73ed8494ae3aba6bd0483c4d9fa6d"
	Oct 26 15:12:04 no-preload-475081 kubelet[711]: E1026 15:12:04.064548     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4ss9k_kubernetes-dashboard(efaa899d-cf84-4a9b-b57a-cf83dc11107f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k" podUID="efaa899d-cf84-4a9b-b57a-cf83dc11107f"
	Oct 26 15:12:05 no-preload-475081 kubelet[711]: I1026 15:12:05.069147     711 scope.go:117] "RemoveContainer" containerID="030556e7613924aa8dd049dacca3918881d73ed8494ae3aba6bd0483c4d9fa6d"
	Oct 26 15:12:05 no-preload-475081 kubelet[711]: E1026 15:12:05.069396     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4ss9k_kubernetes-dashboard(efaa899d-cf84-4a9b-b57a-cf83dc11107f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k" podUID="efaa899d-cf84-4a9b-b57a-cf83dc11107f"
	Oct 26 15:12:06 no-preload-475081 kubelet[711]: I1026 15:12:06.071696     711 scope.go:117] "RemoveContainer" containerID="030556e7613924aa8dd049dacca3918881d73ed8494ae3aba6bd0483c4d9fa6d"
	Oct 26 15:12:06 no-preload-475081 kubelet[711]: E1026 15:12:06.071959     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4ss9k_kubernetes-dashboard(efaa899d-cf84-4a9b-b57a-cf83dc11107f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k" podUID="efaa899d-cf84-4a9b-b57a-cf83dc11107f"
	Oct 26 15:12:20 no-preload-475081 kubelet[711]: I1026 15:12:20.000250     711 scope.go:117] "RemoveContainer" containerID="030556e7613924aa8dd049dacca3918881d73ed8494ae3aba6bd0483c4d9fa6d"
	Oct 26 15:12:20 no-preload-475081 kubelet[711]: I1026 15:12:20.109463     711 scope.go:117] "RemoveContainer" containerID="030556e7613924aa8dd049dacca3918881d73ed8494ae3aba6bd0483c4d9fa6d"
	Oct 26 15:12:20 no-preload-475081 kubelet[711]: I1026 15:12:20.109746     711 scope.go:117] "RemoveContainer" containerID="0ec84e9db53528cf0f7266fd92440deff135d0675eb15c3c396aa1c80902cc8a"
	Oct 26 15:12:20 no-preload-475081 kubelet[711]: E1026 15:12:20.109986     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4ss9k_kubernetes-dashboard(efaa899d-cf84-4a9b-b57a-cf83dc11107f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k" podUID="efaa899d-cf84-4a9b-b57a-cf83dc11107f"
	Oct 26 15:12:24 no-preload-475081 kubelet[711]: I1026 15:12:24.124248     711 scope.go:117] "RemoveContainer" containerID="fd565f0a0c107feb72e7717ce5647b8b1b147ee26d5fbd64db46807decac800f"
	Oct 26 15:12:24 no-preload-475081 kubelet[711]: I1026 15:12:24.191359     711 scope.go:117] "RemoveContainer" containerID="0ec84e9db53528cf0f7266fd92440deff135d0675eb15c3c396aa1c80902cc8a"
	Oct 26 15:12:24 no-preload-475081 kubelet[711]: E1026 15:12:24.191530     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4ss9k_kubernetes-dashboard(efaa899d-cf84-4a9b-b57a-cf83dc11107f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k" podUID="efaa899d-cf84-4a9b-b57a-cf83dc11107f"
	Oct 26 15:12:40 no-preload-475081 kubelet[711]: I1026 15:12:40.002639     711 scope.go:117] "RemoveContainer" containerID="0ec84e9db53528cf0f7266fd92440deff135d0675eb15c3c396aa1c80902cc8a"
	Oct 26 15:12:40 no-preload-475081 kubelet[711]: E1026 15:12:40.003412     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4ss9k_kubernetes-dashboard(efaa899d-cf84-4a9b-b57a-cf83dc11107f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k" podUID="efaa899d-cf84-4a9b-b57a-cf83dc11107f"
	Oct 26 15:12:44 no-preload-475081 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:12:44 no-preload-475081 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:12:44 no-preload-475081 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 26 15:12:44 no-preload-475081 systemd[1]: kubelet.service: Consumed 1.782s CPU time.
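The journal entries above show systemd stopping kubelet as part of the pause step. A minimal sketch for confirming the unit state inside the node, using the same binary the test invokes:

	out/minikube-linux-amd64 -p no-preload-475081 ssh -- sudo systemctl is-active kubelet
	out/minikube-linux-amd64 -p no-preload-475081 ssh -- sudo journalctl -u kubelet -n 20 --no-pager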
	
	
	==> kubernetes-dashboard [e82d8c73ee208681c543afe0a6794823e783821144b3f8cfefc86d3f34178a92] <==
	2025/10/26 15:12:00 Starting overwatch
	2025/10/26 15:12:00 Using namespace: kubernetes-dashboard
	2025/10/26 15:12:00 Using in-cluster config to connect to apiserver
	2025/10/26 15:12:00 Using secret token for csrf signing
	2025/10/26 15:12:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 15:12:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 15:12:00 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 15:12:00 Generating JWE encryption key
	2025/10/26 15:12:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 15:12:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 15:12:00 Initializing JWE encryption key from synchronized object
	2025/10/26 15:12:00 Creating in-cluster Sidecar client
	2025/10/26 15:12:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:12:00 Serving insecurely on HTTP port: 9090
	2025/10/26 15:12:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [25eb572506b0f83109b10c039ec7e2b2de1dbc85f0c88659521085982e86afb2] <==
	I1026 15:12:24.188239       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 15:12:24.197364       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 15:12:24.197447       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 15:12:24.199617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:27.654528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:31.915530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:35.513940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:38.568308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:41.590571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:41.595259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:12:41.595410       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:12:41.595613       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-475081_5a6149fd-5a97-4ffc-a14f-7709e98ae21e!
	I1026 15:12:41.595569       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a47d1b1-8ba0-4362-958c-984ac082c96f", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-475081_5a6149fd-5a97-4ffc-a14f-7709e98ae21e became leader
	W1026 15:12:41.597804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:41.603481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:12:41.695897       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-475081_5a6149fd-5a97-4ffc-a14f-7709e98ae21e!
	W1026 15:12:43.606500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:43.610891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:45.614033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:45.623413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fd565f0a0c107feb72e7717ce5647b8b1b147ee26d5fbd64db46807decac800f] <==
	I1026 15:11:53.376614       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 15:12:23.380569       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
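This first storage-provisioner instance dies on the same 10.96.0.1:443 timeout seen in the CoreDNS log; its replacement (25eb572506b0f...) above later acquires the k8s.io-minikube-hostpath lease. A minimal sketch for inspecting the Endpoints-based leader record, assuming only the object name from the log:

	kubectl --context no-preload-475081 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml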
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1026 15:12:46.399638 1100041 logs.go:261] failed to output audit logs: failed to create audit report: failed to convert logs to rows: failed to unmarshal "{\"specversion\":\"1.0\",\"id\":\"b606e25c-5c10-4fcd-866f-47d5ce262dfd\",\"source\":\"https://minikube.sigs.k8s.io/\",\"type\":\"io.k8s.sigs.minikube.audit\",\"datacontenttype\":\"application/json\",\"data\":{\"args\":\"ha-068218 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml\",\"command\":\"kubectl\",\"endTime\":\"26 Oct 25 14:46 UTC\",\"id\":\"0761981e-1699-4e8d-9b12-db92260a4353\",\"profile\":\"ha-068218\",\"startTime\":\"26 Oct 25 14:46 UTC\",\"user\":\"jenkins\",\"version\":\"v1.37.0\"}": unexpected end of JSON input

                                                
                                                
** /stderr **
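The audit failure in stderr above is a truncated entry: the JSON object ends mid-value, so unmarshalling reports "unexpected end of JSON input". A minimal illustration of the same failure class on a cut-off object (jq here is an assumption; the test itself uses Go's encoding/json):

	printf '%s' '{"specversion":"1.0","data":{"args":"kubectl apply"' | jq .
	# jq fails to parse because the object is cut off, mirroring the audit line above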
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-475081 -n no-preload-475081
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-475081 -n no-preload-475081: exit status 2 (385.415851ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-475081 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-475081
helpers_test.go:243: (dbg) docker inspect no-preload-475081:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e55f49a3db72f1b24108085ea7f4b5e53553ce1ef7c1d5f10ad348de3f9ba2f",
	        "Created": "2025-10-26T15:10:28.066508779Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1088209,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:11:43.497543651Z",
	            "FinishedAt": "2025-10-26T15:11:42.544687575Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/5e55f49a3db72f1b24108085ea7f4b5e53553ce1ef7c1d5f10ad348de3f9ba2f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e55f49a3db72f1b24108085ea7f4b5e53553ce1ef7c1d5f10ad348de3f9ba2f/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e55f49a3db72f1b24108085ea7f4b5e53553ce1ef7c1d5f10ad348de3f9ba2f/hosts",
	        "LogPath": "/var/lib/docker/containers/5e55f49a3db72f1b24108085ea7f4b5e53553ce1ef7c1d5f10ad348de3f9ba2f/5e55f49a3db72f1b24108085ea7f4b5e53553ce1ef7c1d5f10ad348de3f9ba2f-json.log",
	        "Name": "/no-preload-475081",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-475081:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-475081",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e55f49a3db72f1b24108085ea7f4b5e53553ce1ef7c1d5f10ad348de3f9ba2f",
	                "LowerDir": "/var/lib/docker/overlay2/5d8f134ee6ffed6d774f4544c7c284f648de8e02713b44278cfa81aa87432fd1-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5d8f134ee6ffed6d774f4544c7c284f648de8e02713b44278cfa81aa87432fd1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5d8f134ee6ffed6d774f4544c7c284f648de8e02713b44278cfa81aa87432fd1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5d8f134ee6ffed6d774f4544c7c284f648de8e02713b44278cfa81aa87432fd1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-475081",
	                "Source": "/var/lib/docker/volumes/no-preload-475081/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-475081",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-475081",
	                "name.minikube.sigs.k8s.io": "no-preload-475081",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3e69a5fcb8a91227eb3b34dd1354020240fb88cb657cb102df54e5bb652f6290",
	            "SandboxKey": "/var/run/docker/netns/3e69a5fcb8a9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33837"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33838"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33841"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33839"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33840"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-475081": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:a3:a9:68:2e:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "da1bd6d7ce5203f11d1c54a9875cb6a6358a5bc321289fcb416f235a12121f07",
	                    "EndpointID": "71ace854f184b11ef48ec7244b9301bcd1a7f8995158afa343218d64554aff2b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-475081",
	                        "5e55f49a3db7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
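Rather than eyeballing the full inspect dump, the fields these post-mortems actually care about (container state and the published host ports) can be pulled out programmatically. A minimal sketch, assuming docker is on PATH; the struct models only the fields used here, with names taken from the JSON above:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Minimal view of the `docker inspect` payload shown above.
	type inspectEntry struct {
		State struct {
			Status  string
			Running bool
		}
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "no-preload-475081").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry // docker inspect always returns a JSON array
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		c := entries[0]
		fmt.Println("state:", c.State.Status)
		// 8443/tcp is the apiserver port the status probes below are checking.
		for _, b := range c.NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort)
		}
	}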
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-475081 -n no-preload-475081
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-475081 -n no-preload-475081: exit status 2 (378.598356ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
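The --format flag used by these probes is a Go text/template rendered against the status object, which is why {{.Host}} and {{.APIServer}} each print a single component's state. A minimal sketch of the mechanism; the Status struct here is illustrative, not minikube's actual type:

	package main

	import (
		"os"
		"text/template"
	)

	// Illustrative stand-in for the object the template is executed against.
	type Status struct {
		Host      string
		APIServer string
		Kubelet   string
	}

	func main() {
		st := Status{Host: "Running", APIServer: "Paused", Kubelet: "Stopped"}
		// Same syntax as `minikube status --format={{.APIServer}}`.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}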
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-475081 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-475081 logs -n 25: (1.403947515s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-330914 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:11 UTC │
	│ ssh     │ cert-options-124833 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-124833          │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ ssh     │ -p cert-options-124833 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-124833          │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ delete  │ -p cert-options-124833                                                                                                                                                                                                                        │ cert-options-124833          │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:10 UTC │
	│ start   │ -p no-preload-475081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:10 UTC │ 26 Oct 25 15:11 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-330914 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ stop    │ -p old-k8s-version-330914 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ addons  │ enable metrics-server -p no-preload-475081 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ stop    │ -p no-preload-475081 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-330914 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p old-k8s-version-330914 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:12 UTC │
	│ addons  │ enable dashboard -p no-preload-475081 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p no-preload-475081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p kubernetes-upgrade-176599 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-176599    │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	│ start   │ -p kubernetes-upgrade-176599 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-176599    │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ delete  │ -p kubernetes-upgrade-176599                                                                                                                                                                                                                  │ kubernetes-upgrade-176599    │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p embed-certs-535130 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-535130           │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	│ image   │ old-k8s-version-330914 image list --format=json                                                                                                                                                                                               │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ pause   │ -p old-k8s-version-330914 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	│ delete  │ -p old-k8s-version-330914                                                                                                                                                                                                                     │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ image   │ no-preload-475081 image list --format=json                                                                                                                                                                                                    │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ pause   │ -p no-preload-475081 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	│ delete  │ -p old-k8s-version-330914                                                                                                                                                                                                                     │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ delete  │ -p disable-driver-mounts-619402                                                                                                                                                                                                               │ disable-driver-mounts-619402 │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p default-k8s-diff-port-790012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-790012 │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:12:46
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:12:46.913282 1100384 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:12:46.913615 1100384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:12:46.913629 1100384 out.go:374] Setting ErrFile to fd 2...
	I1026 15:12:46.913635 1100384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:12:46.913903 1100384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:12:46.914614 1100384 out.go:368] Setting JSON to false
	I1026 15:12:46.915906 1100384 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10515,"bootTime":1761481052,"procs":382,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:12:46.916008 1100384 start.go:141] virtualization: kvm guest
	I1026 15:12:46.918294 1100384 out.go:179] * [default-k8s-diff-port-790012] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:12:46.920082 1100384 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:12:46.920097 1100384 notify.go:220] Checking for updates...
	I1026 15:12:46.922456 1100384 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:12:46.923777 1100384 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:12:46.925082 1100384 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 15:12:46.926246 1100384 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:12:46.927604 1100384 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:12:46.929424 1100384 config.go:182] Loaded profile config "cert-expiration-619245": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:12:46.929593 1100384 config.go:182] Loaded profile config "embed-certs-535130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:12:46.929731 1100384 config.go:182] Loaded profile config "no-preload-475081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:12:46.929876 1100384 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:12:46.959994 1100384 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 15:12:46.960094 1100384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:12:47.026290 1100384 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 15:12:47.01297868 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:12:47.026416 1100384 docker.go:318] overlay module found
	I1026 15:12:47.028542 1100384 out.go:179] * Using the docker driver based on user configuration
	I1026 15:12:47.030036 1100384 start.go:305] selected driver: docker
	I1026 15:12:47.030068 1100384 start.go:925] validating driver "docker" against <nil>
	I1026 15:12:47.030082 1100384 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:12:47.030898 1100384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:12:47.095517 1100384 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 15:12:47.083350967 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:12:47.095686 1100384 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:12:47.095934 1100384 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:12:47.097469 1100384 out.go:179] * Using Docker driver with root privileges
	I1026 15:12:47.098755 1100384 cni.go:84] Creating CNI manager for ""
	I1026 15:12:47.098856 1100384 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:12:47.098870 1100384 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 15:12:47.098966 1100384 start.go:349] cluster config:
	{Name:default-k8s-diff-port-790012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-790012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:12:47.100523 1100384 out.go:179] * Starting "default-k8s-diff-port-790012" primary control-plane node in "default-k8s-diff-port-790012" cluster
	I1026 15:12:47.101838 1100384 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:12:47.103453 1100384 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:12:47.104707 1100384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:12:47.104796 1100384 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 15:12:47.104813 1100384 cache.go:58] Caching tarball of preloaded images
	I1026 15:12:47.104813 1100384 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:12:47.104928 1100384 preload.go:233] Found /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 15:12:47.104946 1100384 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:12:47.105147 1100384 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/default-k8s-diff-port-790012/config.json ...
	I1026 15:12:47.105213 1100384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/default-k8s-diff-port-790012/config.json: {Name:mk4b71cdf44a18f1da68cec21c669ee97405d0ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:12:47.129324 1100384 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:12:47.129349 1100384 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:12:47.129372 1100384 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:12:47.129415 1100384 start.go:360] acquireMachinesLock for default-k8s-diff-port-790012: {Name:mk4d989509691b9ea8e9427c8b09bc286b8cef4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:12:47.129536 1100384 start.go:364] duration metric: took 98.433µs to acquireMachinesLock for "default-k8s-diff-port-790012"
	I1026 15:12:47.129567 1100384 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-790012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-790012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:12:47.129659 1100384 start.go:125] createHost starting for "" (driver="docker")
	I1026 15:12:43.917639 1094884 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 15:12:43.923011 1094884 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 15:12:43.923060 1094884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 15:12:43.938402 1094884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 15:12:44.207937 1094884 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:12:44.208023 1094884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:12:44.208126 1094884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-535130 minikube.k8s.io/updated_at=2025_10_26T15_12_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=embed-certs-535130 minikube.k8s.io/primary=true
	I1026 15:12:44.220806 1094884 ops.go:34] apiserver oom_adj: -16
	I1026 15:12:44.294977 1094884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:12:44.795989 1094884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:12:45.296105 1094884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:12:45.795419 1094884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:12:46.295118 1094884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:12:46.795297 1094884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:12:47.295367 1094884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	
	==> CRI-O <==
	Oct 26 15:12:03 no-preload-475081 crio[563]: time="2025-10-26T15:12:03.862123587Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:12:04 no-preload-475081 crio[563]: time="2025-10-26T15:12:04.06566861Z" level=info msg="Removing container: d1aac639d2701bd3d664d4238f3a67cc5f2da5687fca8c7c827325be59ee2400" id=725d5dd0-6c53-4cb3-9c8f-aae478ad05a4 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:12:04 no-preload-475081 crio[563]: time="2025-10-26T15:12:04.077654391Z" level=info msg="Removed container d1aac639d2701bd3d664d4238f3a67cc5f2da5687fca8c7c827325be59ee2400: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k/dashboard-metrics-scraper" id=725d5dd0-6c53-4cb3-9c8f-aae478ad05a4 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:12:20 no-preload-475081 crio[563]: time="2025-10-26T15:12:20.000686339Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b43c1373-3e7c-41c8-9f73-db211fa5204a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:12:20 no-preload-475081 crio[563]: time="2025-10-26T15:12:20.001825245Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f0bfee04-9474-409d-9b39-2d7b06f7208b name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:12:20 no-preload-475081 crio[563]: time="2025-10-26T15:12:20.002953267Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k/dashboard-metrics-scraper" id=8c8054de-9d87-4d69-abb7-a3977890dc18 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:12:20 no-preload-475081 crio[563]: time="2025-10-26T15:12:20.003105919Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:20 no-preload-475081 crio[563]: time="2025-10-26T15:12:20.008948387Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:20 no-preload-475081 crio[563]: time="2025-10-26T15:12:20.009520022Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:20 no-preload-475081 crio[563]: time="2025-10-26T15:12:20.050632772Z" level=info msg="Created container 0ec84e9db53528cf0f7266fd92440deff135d0675eb15c3c396aa1c80902cc8a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k/dashboard-metrics-scraper" id=8c8054de-9d87-4d69-abb7-a3977890dc18 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:12:20 no-preload-475081 crio[563]: time="2025-10-26T15:12:20.051366905Z" level=info msg="Starting container: 0ec84e9db53528cf0f7266fd92440deff135d0675eb15c3c396aa1c80902cc8a" id=28f34e62-75dd-40a6-a6bf-505de2451577 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:12:20 no-preload-475081 crio[563]: time="2025-10-26T15:12:20.053362728Z" level=info msg="Started container" PID=1754 containerID=0ec84e9db53528cf0f7266fd92440deff135d0675eb15c3c396aa1c80902cc8a description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k/dashboard-metrics-scraper id=28f34e62-75dd-40a6-a6bf-505de2451577 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ee23a609fc77d94a3f21344816eb15996c58960315304f4db639720d00c87218
	Oct 26 15:12:20 no-preload-475081 crio[563]: time="2025-10-26T15:12:20.111103774Z" level=info msg="Removing container: 030556e7613924aa8dd049dacca3918881d73ed8494ae3aba6bd0483c4d9fa6d" id=1e64131b-cc85-492a-9d42-844594cb3e5c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:12:20 no-preload-475081 crio[563]: time="2025-10-26T15:12:20.124351362Z" level=info msg="Removed container 030556e7613924aa8dd049dacca3918881d73ed8494ae3aba6bd0483c4d9fa6d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k/dashboard-metrics-scraper" id=1e64131b-cc85-492a-9d42-844594cb3e5c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:12:24 no-preload-475081 crio[563]: time="2025-10-26T15:12:24.124675132Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8879b7e7-084a-4d82-9b52-df1d8be50751 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:12:24 no-preload-475081 crio[563]: time="2025-10-26T15:12:24.125626161Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4bdf3a24-388c-4c0d-a4d6-a83785d201e9 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:12:24 no-preload-475081 crio[563]: time="2025-10-26T15:12:24.126672824Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=aa76af78-b791-4b41-827d-c9ecd1288fc2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:12:24 no-preload-475081 crio[563]: time="2025-10-26T15:12:24.126812735Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:24 no-preload-475081 crio[563]: time="2025-10-26T15:12:24.131646461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:24 no-preload-475081 crio[563]: time="2025-10-26T15:12:24.131826967Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/762bb6d6f287de0a8a74db036c353130536ffea687c9e2a4aae75ce3d9a941d7/merged/etc/passwd: no such file or directory"
	Oct 26 15:12:24 no-preload-475081 crio[563]: time="2025-10-26T15:12:24.131852981Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/762bb6d6f287de0a8a74db036c353130536ffea687c9e2a4aae75ce3d9a941d7/merged/etc/group: no such file or directory"
	Oct 26 15:12:24 no-preload-475081 crio[563]: time="2025-10-26T15:12:24.132095533Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:12:24 no-preload-475081 crio[563]: time="2025-10-26T15:12:24.171524504Z" level=info msg="Created container 25eb572506b0f83109b10c039ec7e2b2de1dbc85f0c88659521085982e86afb2: kube-system/storage-provisioner/storage-provisioner" id=aa76af78-b791-4b41-827d-c9ecd1288fc2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:12:24 no-preload-475081 crio[563]: time="2025-10-26T15:12:24.17222666Z" level=info msg="Starting container: 25eb572506b0f83109b10c039ec7e2b2de1dbc85f0c88659521085982e86afb2" id=423ad62b-e367-4881-aeca-2a6207e2af05 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:12:24 no-preload-475081 crio[563]: time="2025-10-26T15:12:24.174044058Z" level=info msg="Started container" PID=1768 containerID=25eb572506b0f83109b10c039ec7e2b2de1dbc85f0c88659521085982e86afb2 description=kube-system/storage-provisioner/storage-provisioner id=423ad62b-e367-4881-aeca-2a6207e2af05 name=/runtime.v1.RuntimeService/StartContainer sandboxID=20dcd5eeafe71baebbe702cbaa4dca8e5066d28e6bf9c8e35adfbd791a305fcf
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	25eb572506b0f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   20dcd5eeafe71       storage-provisioner                          kube-system
	0ec84e9db5352       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           29 seconds ago      Exited              dashboard-metrics-scraper   2                   ee23a609fc77d       dashboard-metrics-scraper-6ffb444bf9-4ss9k   kubernetes-dashboard
	e82d8c73ee208       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   49 seconds ago      Running             kubernetes-dashboard        0                   d0c7581fdc437       kubernetes-dashboard-855c9754f9-swr7t        kubernetes-dashboard
	9e9e9731eb664       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   13c7f560aeb59       busybox                                      default
	c9a8e6dbea2af       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   b768c5151c84f       coredns-66bc5c9577-knr22                     kube-system
	8db7d27d5a317       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   4d5aad78500e7       kindnet-7cnvx                                kube-system
	fd565f0a0c107       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   20dcd5eeafe71       storage-provisioner                          kube-system
	4128b713bc3da       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           56 seconds ago      Running             kube-proxy                  0                   42c59105ffefb       kube-proxy-smtlg                             kube-system
	55addbe4a3d90       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           58 seconds ago      Running             kube-controller-manager     0                   d5dc0a4427687       kube-controller-manager-no-preload-475081    kube-system
	72798a668fb70       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           58 seconds ago      Running             etcd                        0                   76caa198cfa6f       etcd-no-preload-475081                       kube-system
	c9e1c6df0d421       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           58 seconds ago      Running             kube-apiserver              0                   81224d2f390fd       kube-apiserver-no-preload-475081             kube-system
	ca6f184d3a6d0       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           58 seconds ago      Running             kube-scheduler              0                   0185635e166bb       kube-scheduler-no-preload-475081             kube-system
	
	
	==> coredns [c9a8e6dbea2afb3687eae7cf2bdc70948901868e83cb61c6a3824c7badb8f216] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57161 - 59681 "HINFO IN 4383350840677580000.7042701348998389816. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.084423885s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-475081
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-475081
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=no-preload-475081
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_10_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:10:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-475081
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:12:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:12:33 +0000   Sun, 26 Oct 2025 15:10:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:12:33 +0000   Sun, 26 Oct 2025 15:10:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:12:33 +0000   Sun, 26 Oct 2025 15:10:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:12:33 +0000   Sun, 26 Oct 2025 15:11:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-475081
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                27d383f0-839c-47db-b23d-2fb7490add92
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-knr22                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-no-preload-475081                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-7cnvx                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-no-preload-475081              250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-no-preload-475081     200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-smtlg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-no-preload-475081              100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4ss9k    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-swr7t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 55s                  kube-proxy       
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)  kubelet          Node no-preload-475081 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet          Node no-preload-475081 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x8 over 118s)  kubelet          Node no-preload-475081 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    113s                 kubelet          Node no-preload-475081 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s                 kubelet          Node no-preload-475081 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  113s                 kubelet          Node no-preload-475081 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           109s                 node-controller  Node no-preload-475081 event: Registered Node no-preload-475081 in Controller
	  Normal  NodeReady                95s                  kubelet          Node no-preload-475081 status is now: NodeReady
	  Normal  Starting                 60s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 60s)    kubelet          Node no-preload-475081 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 60s)    kubelet          Node no-preload-475081 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 60s)    kubelet          Node no-preload-475081 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                  node-controller  Node no-preload-475081 event: Registered Node no-preload-475081 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	
	
	==> etcd [72798a668fb70570d7f8691079339c46937ad357412930fe98e931819114ad86] <==
	{"level":"warn","ts":"2025-10-26T15:11:51.759399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.767491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.775020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.790844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.805940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.813344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.819551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.826684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.833941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.841563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.854298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.861215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.867764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.875201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.882904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.890208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.898563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.905638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.912414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.918924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.926341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.944494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.948523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:51.962925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:11:52.016083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43612","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:12:49 up  2:55,  0 user,  load average: 2.34, 2.40, 1.67
	Linux no-preload-475081 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8db7d27d5a3179e5157c946470653bdf1401a6583999cfe0b6e584dbd4aa55da] <==
	I1026 15:11:53.559143       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:11:53.630653       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1026 15:11:53.630898       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:11:53.630921       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:11:53.630958       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:11:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:11:53.856310       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:11:53.856408       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:11:53.856435       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:11:53.956000       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 15:11:54.256071       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:11:54.256110       1 metrics.go:72] Registering metrics
	I1026 15:11:54.256217       1 controller.go:711] "Syncing nftables rules"
	I1026 15:12:03.833780       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 15:12:03.833889       1 main.go:301] handling current node
	I1026 15:12:13.834425       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 15:12:13.834510       1 main.go:301] handling current node
	I1026 15:12:23.833514       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 15:12:23.833580       1 main.go:301] handling current node
	I1026 15:12:33.834207       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 15:12:33.834257       1 main.go:301] handling current node
	I1026 15:12:43.842257       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 15:12:43.842290       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c9e1c6df0d421d98d9ed1fd66b6c86206eb9055c5559f467f5d78f9891d1b67b] <==
	I1026 15:11:52.500548       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:11:52.500573       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:11:52.500599       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 15:11:52.500606       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 15:11:52.500721       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1026 15:11:52.501047       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 15:11:52.501216       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 15:11:52.501340       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:11:52.501703       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1026 15:11:52.506542       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1026 15:11:52.506605       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	E1026 15:11:52.508486       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 15:11:52.508709       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 15:11:52.556604       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:11:52.787898       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 15:11:52.823794       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:11:52.846215       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:11:52.854045       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:11:52.865139       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:11:52.907204       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.79.177"}
	I1026 15:11:52.918040       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.115.201"}
	I1026 15:11:53.404919       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:11:56.247953       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 15:11:56.346689       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:11:56.396973       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [55addbe4a3d90ebe69842fd45024bf12a7a8de8c7e93f05e1323f03b190d25ec] <==
	I1026 15:11:55.825548       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 15:11:55.825595       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 15:11:55.825602       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 15:11:55.825607       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 15:11:55.827807       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 15:11:55.830086       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 15:11:55.831552       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 15:11:55.838862       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1026 15:11:55.843070       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 15:11:55.843120       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 15:11:55.843123       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 15:11:55.843136       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 15:11:55.843136       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 15:11:55.843250       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 15:11:55.843347       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-475081"
	I1026 15:11:55.843429       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 15:11:55.843479       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 15:11:55.843567       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 15:11:55.844592       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 15:11:55.845775       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 15:11:55.848399       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 15:11:55.849598       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 15:11:55.849644       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:11:55.853915       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 15:11:55.872375       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [4128b713bc3dafdb515ba3846752fb30d1e3a80d54c49f4b46aa2506000b8235] <==
	I1026 15:11:53.408993       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:11:53.496630       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:11:53.596779       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:11:53.596812       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1026 15:11:53.596966       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:11:53.618915       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:11:53.618979       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:11:53.624517       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:11:53.624927       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:11:53.624958       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:11:53.627894       1 config.go:200] "Starting service config controller"
	I1026 15:11:53.627920       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:11:53.627926       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:11:53.627936       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:11:53.627956       1 config.go:309] "Starting node config controller"
	I1026 15:11:53.627961       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:11:53.628098       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:11:53.628121       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:11:53.728938       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 15:11:53.728956       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:11:53.728935       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:11:53.729009       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ca6f184d3a6d0f2f0031a61280ff5266dd116d977154881018aa85f3aa81d941] <==
	I1026 15:11:51.148030       1 serving.go:386] Generated self-signed cert in-memory
	W1026 15:11:52.422616       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 15:11:52.422777       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 15:11:52.422800       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 15:11:52.422831       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 15:11:52.462746       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 15:11:52.462798       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:11:52.466104       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:11:52.466264       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:11:52.466272       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:11:52.466446       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 15:11:52.567081       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:11:56 no-preload-475081 kubelet[711]: I1026 15:11:56.577646     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/efaa899d-cf84-4a9b-b57a-cf83dc11107f-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-4ss9k\" (UID: \"efaa899d-cf84-4a9b-b57a-cf83dc11107f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k"
	Oct 26 15:11:56 no-preload-475081 kubelet[711]: I1026 15:11:56.577669     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkzsj\" (UniqueName: \"kubernetes.io/projected/efaa899d-cf84-4a9b-b57a-cf83dc11107f-kube-api-access-zkzsj\") pod \"dashboard-metrics-scraper-6ffb444bf9-4ss9k\" (UID: \"efaa899d-cf84-4a9b-b57a-cf83dc11107f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k"
	Oct 26 15:12:00 no-preload-475081 kubelet[711]: I1026 15:12:00.960715     711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 26 15:12:01 no-preload-475081 kubelet[711]: I1026 15:12:01.072812     711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-swr7t" podStartSLOduration=1.785409121 podStartE2EDuration="5.072771669s" podCreationTimestamp="2025-10-26 15:11:56 +0000 UTC" firstStartedPulling="2025-10-26 15:11:56.798935571 +0000 UTC m=+6.893230430" lastFinishedPulling="2025-10-26 15:12:00.086298119 +0000 UTC m=+10.180592978" observedRunningTime="2025-10-26 15:12:01.071968626 +0000 UTC m=+11.166263500" watchObservedRunningTime="2025-10-26 15:12:01.072771669 +0000 UTC m=+11.167066535"
	Oct 26 15:12:03 no-preload-475081 kubelet[711]: I1026 15:12:03.058411     711 scope.go:117] "RemoveContainer" containerID="d1aac639d2701bd3d664d4238f3a67cc5f2da5687fca8c7c827325be59ee2400"
	Oct 26 15:12:04 no-preload-475081 kubelet[711]: I1026 15:12:04.064153     711 scope.go:117] "RemoveContainer" containerID="d1aac639d2701bd3d664d4238f3a67cc5f2da5687fca8c7c827325be59ee2400"
	Oct 26 15:12:04 no-preload-475081 kubelet[711]: I1026 15:12:04.064332     711 scope.go:117] "RemoveContainer" containerID="030556e7613924aa8dd049dacca3918881d73ed8494ae3aba6bd0483c4d9fa6d"
	Oct 26 15:12:04 no-preload-475081 kubelet[711]: E1026 15:12:04.064548     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4ss9k_kubernetes-dashboard(efaa899d-cf84-4a9b-b57a-cf83dc11107f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k" podUID="efaa899d-cf84-4a9b-b57a-cf83dc11107f"
	Oct 26 15:12:05 no-preload-475081 kubelet[711]: I1026 15:12:05.069147     711 scope.go:117] "RemoveContainer" containerID="030556e7613924aa8dd049dacca3918881d73ed8494ae3aba6bd0483c4d9fa6d"
	Oct 26 15:12:05 no-preload-475081 kubelet[711]: E1026 15:12:05.069396     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4ss9k_kubernetes-dashboard(efaa899d-cf84-4a9b-b57a-cf83dc11107f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k" podUID="efaa899d-cf84-4a9b-b57a-cf83dc11107f"
	Oct 26 15:12:06 no-preload-475081 kubelet[711]: I1026 15:12:06.071696     711 scope.go:117] "RemoveContainer" containerID="030556e7613924aa8dd049dacca3918881d73ed8494ae3aba6bd0483c4d9fa6d"
	Oct 26 15:12:06 no-preload-475081 kubelet[711]: E1026 15:12:06.071959     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4ss9k_kubernetes-dashboard(efaa899d-cf84-4a9b-b57a-cf83dc11107f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k" podUID="efaa899d-cf84-4a9b-b57a-cf83dc11107f"
	Oct 26 15:12:20 no-preload-475081 kubelet[711]: I1026 15:12:20.000250     711 scope.go:117] "RemoveContainer" containerID="030556e7613924aa8dd049dacca3918881d73ed8494ae3aba6bd0483c4d9fa6d"
	Oct 26 15:12:20 no-preload-475081 kubelet[711]: I1026 15:12:20.109463     711 scope.go:117] "RemoveContainer" containerID="030556e7613924aa8dd049dacca3918881d73ed8494ae3aba6bd0483c4d9fa6d"
	Oct 26 15:12:20 no-preload-475081 kubelet[711]: I1026 15:12:20.109746     711 scope.go:117] "RemoveContainer" containerID="0ec84e9db53528cf0f7266fd92440deff135d0675eb15c3c396aa1c80902cc8a"
	Oct 26 15:12:20 no-preload-475081 kubelet[711]: E1026 15:12:20.109986     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4ss9k_kubernetes-dashboard(efaa899d-cf84-4a9b-b57a-cf83dc11107f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k" podUID="efaa899d-cf84-4a9b-b57a-cf83dc11107f"
	Oct 26 15:12:24 no-preload-475081 kubelet[711]: I1026 15:12:24.124248     711 scope.go:117] "RemoveContainer" containerID="fd565f0a0c107feb72e7717ce5647b8b1b147ee26d5fbd64db46807decac800f"
	Oct 26 15:12:24 no-preload-475081 kubelet[711]: I1026 15:12:24.191359     711 scope.go:117] "RemoveContainer" containerID="0ec84e9db53528cf0f7266fd92440deff135d0675eb15c3c396aa1c80902cc8a"
	Oct 26 15:12:24 no-preload-475081 kubelet[711]: E1026 15:12:24.191530     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4ss9k_kubernetes-dashboard(efaa899d-cf84-4a9b-b57a-cf83dc11107f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k" podUID="efaa899d-cf84-4a9b-b57a-cf83dc11107f"
	Oct 26 15:12:40 no-preload-475081 kubelet[711]: I1026 15:12:40.002639     711 scope.go:117] "RemoveContainer" containerID="0ec84e9db53528cf0f7266fd92440deff135d0675eb15c3c396aa1c80902cc8a"
	Oct 26 15:12:40 no-preload-475081 kubelet[711]: E1026 15:12:40.003412     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4ss9k_kubernetes-dashboard(efaa899d-cf84-4a9b-b57a-cf83dc11107f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4ss9k" podUID="efaa899d-cf84-4a9b-b57a-cf83dc11107f"
	Oct 26 15:12:44 no-preload-475081 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:12:44 no-preload-475081 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:12:44 no-preload-475081 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 26 15:12:44 no-preload-475081 systemd[1]: kubelet.service: Consumed 1.782s CPU time.
	
	
	==> kubernetes-dashboard [e82d8c73ee208681c543afe0a6794823e783821144b3f8cfefc86d3f34178a92] <==
	2025/10/26 15:12:00 Starting overwatch
	2025/10/26 15:12:00 Using namespace: kubernetes-dashboard
	2025/10/26 15:12:00 Using in-cluster config to connect to apiserver
	2025/10/26 15:12:00 Using secret token for csrf signing
	2025/10/26 15:12:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 15:12:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 15:12:00 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 15:12:00 Generating JWE encryption key
	2025/10/26 15:12:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 15:12:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 15:12:00 Initializing JWE encryption key from synchronized object
	2025/10/26 15:12:00 Creating in-cluster Sidecar client
	2025/10/26 15:12:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:12:00 Serving insecurely on HTTP port: 9090
	2025/10/26 15:12:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [25eb572506b0f83109b10c039ec7e2b2de1dbc85f0c88659521085982e86afb2] <==
	I1026 15:12:24.188239       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 15:12:24.197364       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 15:12:24.197447       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 15:12:24.199617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:27.654528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:31.915530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:35.513940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:38.568308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:41.590571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:41.595259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:12:41.595410       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:12:41.595613       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-475081_5a6149fd-5a97-4ffc-a14f-7709e98ae21e!
	I1026 15:12:41.595569       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a47d1b1-8ba0-4362-958c-984ac082c96f", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-475081_5a6149fd-5a97-4ffc-a14f-7709e98ae21e became leader
	W1026 15:12:41.597804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:41.603481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:12:41.695897       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-475081_5a6149fd-5a97-4ffc-a14f-7709e98ae21e!
	W1026 15:12:43.606500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:43.610891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:45.614033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:45.623413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:47.629268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:47.637442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:49.641422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:12:49.646692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fd565f0a0c107feb72e7717ce5647b8b1b147ee26d5fbd64db46807decac800f] <==
	I1026 15:11:53.376614       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 15:12:23.380569       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
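The storage-provisioner fatal at the end of the dump above is a plain API-server reachability failure: its startup probe of https://10.96.0.1:443/version timed out. A minimal sketch of that same probe, assuming it runs inside a pod where 10.96.0.1 is the in-cluster kubernetes Service VIP (the real client also verifies the cluster CA, skipped here for brevity):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 32 * time.Second, // same timeout the provisioner's log shows
			Transport: &http.Transport{
				// simplification: the real provisioner trusts the cluster CA instead
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.96.0.1:443/version?timeout=32s")
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // matches the i/o timeout above
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(string(body))
	}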
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-475081 -n no-preload-475081
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-475081 -n no-preload-475081: exit status 2 (362.233458ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-475081 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.11s)
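For reference, the failing step is the same `pause -p no-preload-475081 --alsologtostderr -v=1` invocation visible in the audit log. A minimal reproduction sketch (not the harness itself), assuming out/minikube-linux-amd64 has been built and the profile still exists:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64",
			"pause", "-p", "no-preload-475081", "--alsologtostderr", "-v=1")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s\n", out)
		if err != nil {
			// a non-nil *exec.ExitError carries the non-zero status the test reports
			fmt.Println("pause failed:", err)
		}
	}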

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-535130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-535130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (628.582437ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:13:11Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-535130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-535130 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-535130 describe deploy/metrics-server -n kube-system: exit status 1 (95.017843ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-535130 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
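The MK_ADDON_ENABLE_PAUSED error above comes from a paused-state check that shells out to `sudo runc list -f json` before enabling the addon; on this node the command fails outright because /run/runc does not exist. A sketch of that check, assuming runc's default state root; the struct fields below are a best-effort match for runc's JSON list output (id/status), not a vendored minikube type:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// this is the failure mode in the stderr above: open /run/runc fails
			fmt.Println("runc list failed:", err)
			return
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			fmt.Println("unexpected runc output:", err)
			return
		}
		for _, c := range cs {
			if c.Status == "paused" {
				fmt.Println("paused container:", c.ID)
			}
		}
	}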
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-535130
helpers_test.go:243: (dbg) docker inspect embed-certs-535130:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "51b1644009afa6cc6a6d9bc914c49eccef03f48b557ac0a8540a6c8848111e36",
	        "Created": "2025-10-26T15:12:28.122091236Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1095598,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:12:28.156292788Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/51b1644009afa6cc6a6d9bc914c49eccef03f48b557ac0a8540a6c8848111e36/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51b1644009afa6cc6a6d9bc914c49eccef03f48b557ac0a8540a6c8848111e36/hostname",
	        "HostsPath": "/var/lib/docker/containers/51b1644009afa6cc6a6d9bc914c49eccef03f48b557ac0a8540a6c8848111e36/hosts",
	        "LogPath": "/var/lib/docker/containers/51b1644009afa6cc6a6d9bc914c49eccef03f48b557ac0a8540a6c8848111e36/51b1644009afa6cc6a6d9bc914c49eccef03f48b557ac0a8540a6c8848111e36-json.log",
	        "Name": "/embed-certs-535130",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-535130:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-535130",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51b1644009afa6cc6a6d9bc914c49eccef03f48b557ac0a8540a6c8848111e36",
	                "LowerDir": "/var/lib/docker/overlay2/f468af4290f0273a6f3d234c071dd725d57ae77fd19dd3a00f1d124df21e3267-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f468af4290f0273a6f3d234c071dd725d57ae77fd19dd3a00f1d124df21e3267/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f468af4290f0273a6f3d234c071dd725d57ae77fd19dd3a00f1d124df21e3267/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f468af4290f0273a6f3d234c071dd725d57ae77fd19dd3a00f1d124df21e3267/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-535130",
	                "Source": "/var/lib/docker/volumes/embed-certs-535130/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-535130",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-535130",
	                "name.minikube.sigs.k8s.io": "embed-certs-535130",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44c174d893093d262939afeae2311ace0f2292b575123054dcf6123c65cdd6da",
	            "SandboxKey": "/var/run/docker/netns/44c174d89309",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33842"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33843"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33846"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33844"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33845"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-535130": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:41:0b:aa:d9:64",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c696734ed668df0fca3efb0f7c1c0265275f09b80d9a59f85ab28b09787295d5",
	                    "EndpointID": "713945b0c7aed54ed88122bc8597625f6145aa93f6780a81b0548802034f64e1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-535130",
	                        "51b1644009af"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-535130 -n embed-certs-535130
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-535130 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-535130 logs -n 25: (1.495890408s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-475081 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │                     │
	│ stop    │ -p no-preload-475081 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-330914 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p old-k8s-version-330914 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:12 UTC │
	│ addons  │ enable dashboard -p no-preload-475081 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p no-preload-475081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p kubernetes-upgrade-176599 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-176599    │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	│ start   │ -p kubernetes-upgrade-176599 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-176599    │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ delete  │ -p kubernetes-upgrade-176599                                                                                                                                                                                                                  │ kubernetes-upgrade-176599    │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p embed-certs-535130 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-535130           │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:13 UTC │
	│ image   │ old-k8s-version-330914 image list --format=json                                                                                                                                                                                               │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ pause   │ -p old-k8s-version-330914 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	│ delete  │ -p old-k8s-version-330914                                                                                                                                                                                                                     │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ image   │ no-preload-475081 image list --format=json                                                                                                                                                                                                    │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ pause   │ -p no-preload-475081 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	│ delete  │ -p old-k8s-version-330914                                                                                                                                                                                                                     │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ delete  │ -p disable-driver-mounts-619402                                                                                                                                                                                                               │ disable-driver-mounts-619402 │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p default-k8s-diff-port-790012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-790012 │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	│ delete  │ -p no-preload-475081                                                                                                                                                                                                                          │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ delete  │ -p no-preload-475081                                                                                                                                                                                                                          │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p newest-cni-450976 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-450976            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	│ start   │ -p cert-expiration-619245 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-619245       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:13 UTC │
	│ delete  │ -p cert-expiration-619245                                                                                                                                                                                                                     │ cert-expiration-619245       │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ start   │ -p auto-498531 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-498531                  │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-535130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-535130           │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:13:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:13:06.563756 1107827 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:13:06.564037 1107827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:13:06.564048 1107827 out.go:374] Setting ErrFile to fd 2...
	I1026 15:13:06.564052 1107827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:13:06.564280 1107827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:13:06.564811 1107827 out.go:368] Setting JSON to false
	I1026 15:13:06.566075 1107827 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10535,"bootTime":1761481052,"procs":365,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:13:06.566200 1107827 start.go:141] virtualization: kvm guest
	I1026 15:13:06.568289 1107827 out.go:179] * [auto-498531] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:13:06.570053 1107827 notify.go:220] Checking for updates...
	I1026 15:13:06.570069 1107827 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:13:06.571369 1107827 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:13:06.572627 1107827 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:13:06.573903 1107827 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 15:13:06.578386 1107827 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:13:06.579639 1107827 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:13:06.581141 1107827 config.go:182] Loaded profile config "default-k8s-diff-port-790012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:06.581264 1107827 config.go:182] Loaded profile config "embed-certs-535130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:06.581415 1107827 config.go:182] Loaded profile config "newest-cni-450976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:06.581572 1107827 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:13:06.607576 1107827 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 15:13:06.607741 1107827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:13:06.671748 1107827 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 15:13:06.660306344 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
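
	The one-line blob above is docker's own info struct, captured from "docker system info --format {{json .}}" and echoed by minikube's info.go. A minimal Go sketch of that round trip, decoding only a few of the JSON keys visible in the dump (NCPU, MemTotal, CgroupDriver, ServerVersion); the struct here is illustrative, not minikube's actual type:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo decodes a small subset of `docker system info --format "{{json .}}"`.
	type dockerInfo struct {
		NCPU          int    `json:"NCPU"`
		MemTotal      int64  `json:"MemTotal"`
		CgroupDriver  string `json:"CgroupDriver"`
		ServerVersion string `json:"ServerVersion"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("cpus=%d mem=%d cgroup=%s server=%s\n", info.NCPU, info.MemTotal, info.CgroupDriver, info.ServerVersion)
	}
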
	I1026 15:13:06.671864 1107827 docker.go:318] overlay module found
	I1026 15:13:06.673623 1107827 out.go:179] * Using the docker driver based on user configuration
	I1026 15:13:06.675496 1107827 start.go:305] selected driver: docker
	I1026 15:13:06.675515 1107827 start.go:925] validating driver "docker" against <nil>
	I1026 15:13:06.675528 1107827 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:13:06.676157 1107827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:13:06.738503 1107827 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 15:13:06.728252144 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:13:06.738684 1107827 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:13:06.738906 1107827 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:13:06.740536 1107827 out.go:179] * Using Docker driver with root privileges
	I1026 15:13:06.741758 1107827 cni.go:84] Creating CNI manager for ""
	I1026 15:13:06.741820 1107827 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:13:06.741831 1107827 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 15:13:06.741891 1107827 start.go:349] cluster config:
	{Name:auto-498531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
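
	The cluster config above is a Go struct rendered with fmt's %+v verb, which is why every field shows as Name:value and zero-value fields (empty strings, <nil>, empty maps) are left in place. A toy reproduction under that assumption; the struct below is a made-up subset for illustration, not minikube's real config type:

	package main

	import "fmt"

	type kubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
		NetworkPlugin     string
		ServiceCIDR       string
	}

	type clusterConfig struct {
		Name             string
		Driver           string
		Memory           int
		CPUs             int
		KubernetesConfig kubernetesConfig
	}

	func main() {
		cfg := clusterConfig{
			Name:   "auto-498531",
			Driver: "docker",
			Memory: 3072,
			CPUs:   2,
			KubernetesConfig: kubernetesConfig{
				KubernetesVersion: "v1.34.1",
				ClusterName:       "auto-498531",
				ContainerRuntime:  "crio",
				NetworkPlugin:     "cni",
				ServiceCIDR:       "10.96.0.0/12",
			},
		}
		// Prints one line in the same Name:value form as the log above:
		// {Name:auto-498531 Driver:docker Memory:3072 CPUs:2 KubernetesConfig:{...}}
		fmt.Printf("%+v\n", cfg)
	}
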
	I1026 15:13:06.743088 1107827 out.go:179] * Starting "auto-498531" primary control-plane node in "auto-498531" cluster
	I1026 15:13:06.744385 1107827 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:13:06.745802 1107827 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:13:06.746991 1107827 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:13:06.747041 1107827 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 15:13:06.747061 1107827 cache.go:58] Caching tarball of preloaded images
	I1026 15:13:06.747096 1107827 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:13:06.747176 1107827 preload.go:233] Found /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 15:13:06.747193 1107827 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:13:06.747301 1107827 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/config.json ...
	I1026 15:13:06.747328 1107827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/config.json: {Name:mk51395bb2b43f058ea11f2c355376c86dda34ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:06.769312 1107827 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:13:06.769337 1107827 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:13:06.769355 1107827 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:13:06.769385 1107827 start.go:360] acquireMachinesLock for auto-498531: {Name:mk2fc728ab6ac55049fdc8daa1ba88be08fec125 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:13:06.769488 1107827 start.go:364] duration metric: took 81.976µs to acquireMachinesLock for "auto-498531"
	I1026 15:13:06.769517 1107827 start.go:93] Provisioning new machine with config: &{Name:auto-498531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:13:06.769600 1107827 start.go:125] createHost starting for "" (driver="docker")
	I1026 15:13:04.383076 1103368 out.go:252]   - Generating certificates and keys ...
	I1026 15:13:04.383156 1103368 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:13:04.383247 1103368 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:13:04.592061 1103368 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:13:04.690538 1103368 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:13:04.938410 1103368 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:13:05.119895 1103368 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:13:05.575678 1103368 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:13:05.575873 1103368 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-450976] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1026 15:13:05.633558 1103368 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:13:05.633792 1103368 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-450976] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1026 15:13:06.189292 1103368 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:13:06.503235 1103368 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:13:06.586480 1103368 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:13:06.586574 1103368 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:13:06.885001 1103368 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:13:07.150062 1103368 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 15:13:07.572037 1103368 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:13:07.627755 1103368 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:13:07.977735 1103368 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:13:07.978469 1103368 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:13:07.986445 1103368 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 15:13:07.988147 1103368 out.go:252]   - Booting up control plane ...
	I1026 15:13:07.988300 1103368 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:13:07.988414 1103368 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:13:07.989323 1103368 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:13:08.012783 1103368 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:13:08.012935 1103368 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:13:08.022381 1103368 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:13:08.023018 1103368 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:13:08.023095 1103368 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:13:08.165977 1103368 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:13:08.166258 1103368 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
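
	The kubelet-check line above polls http://127.0.0.1:10248/healthz with a 4m0s ceiling. A hedged sketch of such a probe loop; the one-second interval and the exact success criteria here are illustrative, not kubeadm's implementation:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// waitForKubelet polls the kubelet healthz endpoint until it answers
	// 200 OK or the timeout elapses.
	func waitForKubelet(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // kubelet is healthy
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("kubelet not healthy after %s", timeout)
	}

	func main() {
		if err := waitForKubelet("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
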
	I1026 15:13:06.772446 1107827 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 15:13:06.772651 1107827 start.go:159] libmachine.API.Create for "auto-498531" (driver="docker")
	I1026 15:13:06.772678 1107827 client.go:168] LocalClient.Create starting
	I1026 15:13:06.772772 1107827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem
	I1026 15:13:06.772839 1107827 main.go:141] libmachine: Decoding PEM data...
	I1026 15:13:06.772863 1107827 main.go:141] libmachine: Parsing certificate...
	I1026 15:13:06.772943 1107827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem
	I1026 15:13:06.772969 1107827 main.go:141] libmachine: Decoding PEM data...
	I1026 15:13:06.772986 1107827 main.go:141] libmachine: Parsing certificate...
	I1026 15:13:06.773367 1107827 cli_runner.go:164] Run: docker network inspect auto-498531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 15:13:06.794081 1107827 cli_runner.go:211] docker network inspect auto-498531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 15:13:06.794156 1107827 network_create.go:284] running [docker network inspect auto-498531] to gather additional debugging logs...
	I1026 15:13:06.794194 1107827 cli_runner.go:164] Run: docker network inspect auto-498531
	W1026 15:13:06.817612 1107827 cli_runner.go:211] docker network inspect auto-498531 returned with exit code 1
	I1026 15:13:06.817656 1107827 network_create.go:287] error running [docker network inspect auto-498531]: docker network inspect auto-498531: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-498531 not found
	I1026 15:13:06.817675 1107827 network_create.go:289] output of [docker network inspect auto-498531]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-498531 not found
	
	** /stderr **
	I1026 15:13:06.817796 1107827 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:13:06.838553 1107827 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fa58be42f477 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:e4:ad:45:54:67} reservation:<nil>}
	I1026 15:13:06.839352 1107827 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-788b1aa150f9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:3d:9b:f7:9b:2d} reservation:<nil>}
	I1026 15:13:06.840103 1107827 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3ea0f8afe5af IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:d6:81:f4:17:77:eb} reservation:<nil>}
	I1026 15:13:06.840681 1107827 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c696734ed668 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:9a:3a:13:85:1e} reservation:<nil>}
	I1026 15:13:06.841441 1107827 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-eb8db690bfd7 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:c2:80:70:9a:55:40} reservation:<nil>}
	I1026 15:13:06.842536 1107827 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f8f030}
	I1026 15:13:06.842558 1107827 network_create.go:124] attempt to create docker network auto-498531 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1026 15:13:06.842600 1107827 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-498531 auto-498531
	I1026 15:13:06.903641 1107827 network_create.go:108] docker network auto-498531 192.168.94.0/24 created
	I1026 15:13:06.903672 1107827 kic.go:121] calculated static IP "192.168.94.2" for the "auto-498531" container
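
	The network_create lines above show the selection rule at work: start at 192.168.49.0/24, step the third octet by 9 (49, 58, 67, 76, 85, 94), and take the first /24 whose gateway address is not already claimed. A simplified sketch of that walk, assuming a local-interface lookup in place of minikube's fuller reservation logic in network.go:

	package main

	import (
		"fmt"
		"net"
	)

	// taken reports whether a gateway IP is already bound to a local
	// interface (e.g. a br-... bridge from an earlier cluster).
	func taken(gateway string) bool {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return false
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.String() == gateway {
				return true
			}
		}
		return false
	}

	func main() {
		// Walk candidate /24s in steps of 9, mirroring the log sequence.
		for octet := 49; octet <= 254; octet += 9 {
			gw := fmt.Sprintf("192.168.%d.1", octet)
			if taken(gw) {
				fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
				continue
			}
			fmt.Printf("using free private subnet 192.168.%d.0/24 (gateway %s)\n", octet, gw)
			return
		}
	}
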
	I1026 15:13:06.903752 1107827 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 15:13:06.922096 1107827 cli_runner.go:164] Run: docker volume create auto-498531 --label name.minikube.sigs.k8s.io=auto-498531 --label created_by.minikube.sigs.k8s.io=true
	I1026 15:13:06.940455 1107827 oci.go:103] Successfully created a docker volume auto-498531
	I1026 15:13:06.940553 1107827 cli_runner.go:164] Run: docker run --rm --name auto-498531-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-498531 --entrypoint /usr/bin/test -v auto-498531:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 15:13:07.355323 1107827 oci.go:107] Successfully prepared a docker volume auto-498531
	I1026 15:13:07.355376 1107827 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:13:07.355416 1107827 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 15:13:07.355476 1107827 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-498531:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
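
	The extraction step above is a one-shot docker run: the lz4-compressed preload tarball is bind-mounted read-only and untarred into the cluster's named volume by the kicbase image's /usr/bin/tar. Rebuilt as a Go exec sketch with the paths copied from the log (image digest omitted for brevity, error handling minimal); this approximates, rather than reproduces, minikube's kic.go:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		tarball := "/home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
		image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro", // tarball mounted read-only
			"-v", "auto-498531:/extractDir",    // named volume receives the images
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("%v: %s", err, out)
		}
	}
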
	
	
	==> CRI-O <==
	Oct 26 15:13:00 embed-certs-535130 crio[779]: time="2025-10-26T15:13:00.347112114Z" level=info msg="Starting container: 45a896f5ab6a5a7fa20cd0a379c78f59f5907d63bd8730a24917f19e103e35d8" id=5fae4e32-5f2a-4dbf-a637-704e2d39a9f0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:13:00 embed-certs-535130 crio[779]: time="2025-10-26T15:13:00.350151853Z" level=info msg="Started container" PID=1844 containerID=45a896f5ab6a5a7fa20cd0a379c78f59f5907d63bd8730a24917f19e103e35d8 description=kube-system/coredns-66bc5c9577-pnbct/coredns id=5fae4e32-5f2a-4dbf-a637-704e2d39a9f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2149e78fc8ad386589ff10cd6395d9fc0db265430c18bdf9c3a922341e0a6216
	Oct 26 15:13:03 embed-certs-535130 crio[779]: time="2025-10-26T15:13:03.456629691Z" level=info msg="Running pod sandbox: default/busybox/POD" id=f3173f41-2fa4-42cc-8ddb-67b651cad7b2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:13:03 embed-certs-535130 crio[779]: time="2025-10-26T15:13:03.456748677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:03 embed-certs-535130 crio[779]: time="2025-10-26T15:13:03.463065062Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b831a3b0356aaba24044a339bf03b37b527d1290bb6bd50bdb0160c0f31e5fed UID:3a83ca98-1247-4189-b60f-6902a250ac9c NetNS:/var/run/netns/1f9d7bfe-650f-4bea-b365-09f1a956ba71 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000382150}] Aliases:map[]}"
	Oct 26 15:13:03 embed-certs-535130 crio[779]: time="2025-10-26T15:13:03.463323352Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 26 15:13:03 embed-certs-535130 crio[779]: time="2025-10-26T15:13:03.474727043Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b831a3b0356aaba24044a339bf03b37b527d1290bb6bd50bdb0160c0f31e5fed UID:3a83ca98-1247-4189-b60f-6902a250ac9c NetNS:/var/run/netns/1f9d7bfe-650f-4bea-b365-09f1a956ba71 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000382150}] Aliases:map[]}"
	Oct 26 15:13:03 embed-certs-535130 crio[779]: time="2025-10-26T15:13:03.474895578Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 26 15:13:03 embed-certs-535130 crio[779]: time="2025-10-26T15:13:03.475906608Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 15:13:03 embed-certs-535130 crio[779]: time="2025-10-26T15:13:03.477064179Z" level=info msg="Ran pod sandbox b831a3b0356aaba24044a339bf03b37b527d1290bb6bd50bdb0160c0f31e5fed with infra container: default/busybox/POD" id=f3173f41-2fa4-42cc-8ddb-67b651cad7b2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:13:03 embed-certs-535130 crio[779]: time="2025-10-26T15:13:03.478292095Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6f72455f-64ea-4c14-968c-8cdf6c6296b0 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:03 embed-certs-535130 crio[779]: time="2025-10-26T15:13:03.478434405Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=6f72455f-64ea-4c14-968c-8cdf6c6296b0 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:03 embed-certs-535130 crio[779]: time="2025-10-26T15:13:03.47858241Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=6f72455f-64ea-4c14-968c-8cdf6c6296b0 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:03 embed-certs-535130 crio[779]: time="2025-10-26T15:13:03.479424847Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3718197e-5f4a-4bad-a686-d1ab0caad72c name=/runtime.v1.ImageService/PullImage
	Oct 26 15:13:03 embed-certs-535130 crio[779]: time="2025-10-26T15:13:03.482601221Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 26 15:13:04 embed-certs-535130 crio[779]: time="2025-10-26T15:13:04.254426567Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=3718197e-5f4a-4bad-a686-d1ab0caad72c name=/runtime.v1.ImageService/PullImage
	Oct 26 15:13:04 embed-certs-535130 crio[779]: time="2025-10-26T15:13:04.25540423Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=48944fd7-f2d3-4681-826c-83c5658c655b name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:04 embed-certs-535130 crio[779]: time="2025-10-26T15:13:04.257102195Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=061c4cf7-ad5a-4bfc-a132-75146811582a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:04 embed-certs-535130 crio[779]: time="2025-10-26T15:13:04.261061479Z" level=info msg="Creating container: default/busybox/busybox" id=43bca2c3-770b-44c4-887d-3c7a6f5ac82e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:13:04 embed-certs-535130 crio[779]: time="2025-10-26T15:13:04.261183096Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:04 embed-certs-535130 crio[779]: time="2025-10-26T15:13:04.266086541Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:04 embed-certs-535130 crio[779]: time="2025-10-26T15:13:04.266612533Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:04 embed-certs-535130 crio[779]: time="2025-10-26T15:13:04.29693312Z" level=info msg="Created container 805ddf29927e0992ee88fb82d5b632357218ae4cbe4b7fdc3f2ccde15db8d4e9: default/busybox/busybox" id=43bca2c3-770b-44c4-887d-3c7a6f5ac82e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:13:04 embed-certs-535130 crio[779]: time="2025-10-26T15:13:04.297765689Z" level=info msg="Starting container: 805ddf29927e0992ee88fb82d5b632357218ae4cbe4b7fdc3f2ccde15db8d4e9" id=92fe4a5b-b647-45fc-bd6b-c079030919a0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:13:04 embed-certs-535130 crio[779]: time="2025-10-26T15:13:04.300345952Z" level=info msg="Started container" PID=1918 containerID=805ddf29927e0992ee88fb82d5b632357218ae4cbe4b7fdc3f2ccde15db8d4e9 description=default/busybox/busybox id=92fe4a5b-b647-45fc-bd6b-c079030919a0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b831a3b0356aaba24044a339bf03b37b527d1290bb6bd50bdb0160c0f31e5fed
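
	The CRI-O entries above trace one pod through four CRI calls: RunPodSandbox, PullImage (after the ImageStatus check reported the image missing), CreateContainer, and StartContainer. A speculative Go sketch issuing the same sequence directly over the CRI gRPC socket; the socket path, metadata, and omitted config fields are illustrative only, and this is not how minikube or the kubelet actually drives CRI-O:

	package main

	import (
		"context"
		"log"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
		ctx := context.Background()
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		img := runtimeapi.NewImageServiceClient(conn)

		// 1. RunPodSandbox: the infra sandbox seen as "Ran pod sandbox ...".
		sandboxCfg := &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{Name: "busybox", Namespace: "default", Uid: "demo-uid"},
		}
		sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
		if err != nil {
			log.Fatal(err)
		}

		// 2. PullImage: CRI-O logged "Image ... not found", so pull first.
		spec := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}
		if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: spec}); err != nil {
			log.Fatal(err)
		}

		// 3. CreateContainer inside the sandbox.
		ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
			PodSandboxId: sb.PodSandboxId,
			Config: &runtimeapi.ContainerConfig{
				Metadata: &runtimeapi.ContainerMetadata{Name: "busybox"},
				Image:    spec,
			},
			SandboxConfig: sandboxCfg,
		})
		if err != nil {
			log.Fatal(err)
		}

		// 4. StartContainer: the "Started container" line above.
		if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
			log.Fatal(err)
		}
	}
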
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	805ddf29927e0       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   b831a3b0356aa       busybox                                      default
	45a896f5ab6a5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   2149e78fc8ad3       coredns-66bc5c9577-pnbct                     kube-system
	64d56a1716889       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   2112a5356892d       storage-provisioner                          kube-system
	5ded35762d4db       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      24 seconds ago      Running             kindnet-cni               0                   7583a4bb4f72b       kindnet-mlqjm                                kube-system
	f14791912d88c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   35a30b4effbc0       kube-proxy-nbr2d                             kube-system
	c74fd553e1f33       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   b22b53a992838       kube-controller-manager-embed-certs-535130   kube-system
	b6d0c5b2e4d97       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   e0e4bddea15f7       kube-apiserver-embed-certs-535130            kube-system
	9246fe5ddf636       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   65b1d1f6dc7b3       etcd-embed-certs-535130                      kube-system
	07ee479de954f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   b984c372a8403       kube-scheduler-embed-certs-535130            kube-system
	
	
	==> coredns [45a896f5ab6a5a7fa20cd0a379c78f59f5907d63bd8730a24917f19e103e35d8] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41355 - 37689 "HINFO IN 5647959654589288835.7417802037035777537. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.46592987s
	
	
	==> describe nodes <==
	Name:               embed-certs-535130
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-535130
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=embed-certs-535130
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_12_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:12:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-535130
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:13:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:12:59 +0000   Sun, 26 Oct 2025 15:12:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:12:59 +0000   Sun, 26 Oct 2025 15:12:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:12:59 +0000   Sun, 26 Oct 2025 15:12:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:12:59 +0000   Sun, 26 Oct 2025 15:12:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-535130
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                d2eb1dd1-3767-46c2-b62f-7198c6aeeadd
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-pnbct                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-535130                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-mlqjm                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-embed-certs-535130             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-embed-certs-535130    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-nbr2d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-embed-certs-535130             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node embed-certs-535130 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node embed-certs-535130 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node embed-certs-535130 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node embed-certs-535130 event: Registered Node embed-certs-535130 in Controller
	  Normal  NodeReady                14s   kubelet          Node embed-certs-535130 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	
	
	==> etcd [9246fe5ddf636917cebbb4d283be8ff5a6834d2d2de118c7a10b0b09251e67fa] <==
	{"level":"warn","ts":"2025-10-26T15:12:40.019902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:12:40.026863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:12:40.033620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:12:40.040580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:12:40.048653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:12:40.056660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:12:40.064283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:12:40.071285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:12:40.078609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:12:40.085522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:12:40.094120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:12:40.104087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:12:40.110472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:12:40.117596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:12:40.124473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:12:40.131853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:12:40.139425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:12:40.146376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:12:40.153646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:12:40.176645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:12:40.183041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:12:40.189570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:12:40.242053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32802","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T15:12:50.647947Z","caller":"traceutil/trace.go:172","msg":"trace[1891384932] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"102.015688ms","start":"2025-10-26T15:12:50.545917Z","end":"2025-10-26T15:12:50.647933Z","steps":["trace[1891384932] 'process raft request'  (duration: 101.898463ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T15:12:51.436233Z","caller":"traceutil/trace.go:172","msg":"trace[149451951] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"133.268485ms","start":"2025-10-26T15:12:51.302948Z","end":"2025-10-26T15:12:51.436216Z","steps":["trace[149451951] 'process raft request'  (duration: 133.12137ms)"],"step_count":1}
	
	
	==> kernel <==
	 15:13:13 up  2:55,  0 user,  load average: 3.42, 2.64, 1.77
	Linux embed-certs-535130 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5ded35762d4dbb24003cb2b4d15276615a118d0bc32e4a55c8731fda0df4c2eb] <==
	I1026 15:12:49.381440       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:12:49.381784       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 15:12:49.381978       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:12:49.382001       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:12:49.382024       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:12:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:12:49.583867       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:12:49.584460       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:12:49.585181       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:12:49.585363       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 15:12:50.079776       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:12:50.079899       1 metrics.go:72] Registering metrics
	I1026 15:12:50.080018       1 controller.go:711] "Syncing nftables rules"
	I1026 15:12:59.589237       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:12:59.589305       1 main.go:301] handling current node
	I1026 15:13:09.586252       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:13:09.586314       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b6d0c5b2e4d975a2b899a6f86d049af3fdb14f3aaff60f614f21398b8d5ca842] <==
	E1026 15:12:40.795204       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1026 15:12:40.842270       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 15:12:40.850679       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:12:40.850797       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1026 15:12:40.858446       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:12:40.858735       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:12:40.938716       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:12:41.648296       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 15:12:41.653878       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 15:12:41.653906       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:12:42.233286       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:12:42.272290       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:12:42.353296       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 15:12:42.360540       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1026 15:12:42.362026       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:12:42.367336       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:12:42.665257       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:12:43.311425       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:12:43.321793       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 15:12:43.333288       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 15:12:47.870594       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:12:47.876919       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:12:48.618146       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 15:12:48.717866       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1026 15:13:11.253787       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:44198: use of closed network connection
	
	
	==> kube-controller-manager [c74fd553e1f33d403ce2380969b7aeb1113dcf23f98f3257b0e9327af2c1ce7d] <==
	I1026 15:12:47.634719       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-535130" podCIDRs=["10.244.0.0/24"]
	I1026 15:12:47.663839       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 15:12:47.663868       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 15:12:47.664001       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 15:12:47.664212       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 15:12:47.665430       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 15:12:47.665621       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 15:12:47.665675       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 15:12:47.665729       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 15:12:47.665785       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 15:12:47.666037       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 15:12:47.665635       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 15:12:47.666680       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 15:12:47.667158       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 15:12:47.668333       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 15:12:47.668470       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 15:12:47.671869       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:12:47.673041       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1026 15:12:47.686374       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 15:12:47.686520       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 15:12:47.686635       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-535130"
	I1026 15:12:47.686704       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1026 15:12:47.692722       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 15:12:47.700282       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:13:02.689223       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [f14791912d88cc1a341a2d9092cece6b0b74a58e05de7dff1ccf3f03362788f0] <==
	I1026 15:12:49.206894       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:12:49.294505       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:12:49.395135       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:12:49.395210       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1026 15:12:49.395322       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:12:49.420069       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:12:49.420127       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:12:49.427204       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:12:49.427740       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:12:49.427790       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:12:49.429465       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:12:49.429532       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:12:49.429584       1 config.go:200] "Starting service config controller"
	I1026 15:12:49.429610       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:12:49.429512       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:12:49.429660       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:12:49.429551       1 config.go:309] "Starting node config controller"
	I1026 15:12:49.429746       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:12:49.530202       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:12:49.530222       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:12:49.530239       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:12:49.530256       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [07ee479de954f02c68877d9c508e58351ff375de38b2f180b41ca757c4862da5] <==
	E1026 15:12:40.705572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 15:12:40.705603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 15:12:40.705639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 15:12:40.705640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 15:12:40.705824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 15:12:40.705891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 15:12:40.706031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 15:12:40.706054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 15:12:40.706144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 15:12:40.706155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 15:12:40.706241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 15:12:40.706421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 15:12:40.706437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 15:12:41.577685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1026 15:12:41.636549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 15:12:41.646671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 15:12:41.652047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 15:12:41.652147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 15:12:41.682774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 15:12:41.733673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 15:12:41.899575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 15:12:41.909741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 15:12:42.016633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 15:12:42.031031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1026 15:12:44.503263       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:12:44 embed-certs-535130 kubelet[1316]: E1026 15:12:44.213812    1316 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-embed-certs-535130\" already exists" pod="kube-system/etcd-embed-certs-535130"
	Oct 26 15:12:44 embed-certs-535130 kubelet[1316]: I1026 15:12:44.248195    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-535130" podStartSLOduration=1.2481781889999999 podStartE2EDuration="1.248178189s" podCreationTimestamp="2025-10-26 15:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:12:44.23350721 +0000 UTC m=+1.156913407" watchObservedRunningTime="2025-10-26 15:12:44.248178189 +0000 UTC m=+1.171584353"
	Oct 26 15:12:44 embed-certs-535130 kubelet[1316]: I1026 15:12:44.268253    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-535130" podStartSLOduration=1.268224985 podStartE2EDuration="1.268224985s" podCreationTimestamp="2025-10-26 15:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:12:44.248594272 +0000 UTC m=+1.172000457" watchObservedRunningTime="2025-10-26 15:12:44.268224985 +0000 UTC m=+1.191631170"
	Oct 26 15:12:44 embed-certs-535130 kubelet[1316]: I1026 15:12:44.279094    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-535130" podStartSLOduration=1.279070791 podStartE2EDuration="1.279070791s" podCreationTimestamp="2025-10-26 15:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:12:44.269048308 +0000 UTC m=+1.192454493" watchObservedRunningTime="2025-10-26 15:12:44.279070791 +0000 UTC m=+1.202476976"
	Oct 26 15:12:44 embed-certs-535130 kubelet[1316]: I1026 15:12:44.298992    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-535130" podStartSLOduration=1.2989698889999999 podStartE2EDuration="1.298969889s" podCreationTimestamp="2025-10-26 15:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:12:44.279371741 +0000 UTC m=+1.202777926" watchObservedRunningTime="2025-10-26 15:12:44.298969889 +0000 UTC m=+1.222376138"
	Oct 26 15:12:47 embed-certs-535130 kubelet[1316]: I1026 15:12:47.727137    1316 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 26 15:12:47 embed-certs-535130 kubelet[1316]: I1026 15:12:47.727915    1316 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 26 15:12:48 embed-certs-535130 kubelet[1316]: I1026 15:12:48.790003    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/526c1bc2-396a-4668-8248-d95483175948-lib-modules\") pod \"kindnet-mlqjm\" (UID: \"526c1bc2-396a-4668-8248-d95483175948\") " pod="kube-system/kindnet-mlqjm"
	Oct 26 15:12:48 embed-certs-535130 kubelet[1316]: I1026 15:12:48.790044    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6afa7745-4329-4477-9744-1aa5b789adc6-kube-proxy\") pod \"kube-proxy-nbr2d\" (UID: \"6afa7745-4329-4477-9744-1aa5b789adc6\") " pod="kube-system/kube-proxy-nbr2d"
	Oct 26 15:12:48 embed-certs-535130 kubelet[1316]: I1026 15:12:48.790064    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6afa7745-4329-4477-9744-1aa5b789adc6-xtables-lock\") pod \"kube-proxy-nbr2d\" (UID: \"6afa7745-4329-4477-9744-1aa5b789adc6\") " pod="kube-system/kube-proxy-nbr2d"
	Oct 26 15:12:48 embed-certs-535130 kubelet[1316]: I1026 15:12:48.790078    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6afa7745-4329-4477-9744-1aa5b789adc6-lib-modules\") pod \"kube-proxy-nbr2d\" (UID: \"6afa7745-4329-4477-9744-1aa5b789adc6\") " pod="kube-system/kube-proxy-nbr2d"
	Oct 26 15:12:48 embed-certs-535130 kubelet[1316]: I1026 15:12:48.790094    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xcp8\" (UniqueName: \"kubernetes.io/projected/6afa7745-4329-4477-9744-1aa5b789adc6-kube-api-access-4xcp8\") pod \"kube-proxy-nbr2d\" (UID: \"6afa7745-4329-4477-9744-1aa5b789adc6\") " pod="kube-system/kube-proxy-nbr2d"
	Oct 26 15:12:48 embed-certs-535130 kubelet[1316]: I1026 15:12:48.790121    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/526c1bc2-396a-4668-8248-d95483175948-xtables-lock\") pod \"kindnet-mlqjm\" (UID: \"526c1bc2-396a-4668-8248-d95483175948\") " pod="kube-system/kindnet-mlqjm"
	Oct 26 15:12:48 embed-certs-535130 kubelet[1316]: I1026 15:12:48.790146    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srppc\" (UniqueName: \"kubernetes.io/projected/526c1bc2-396a-4668-8248-d95483175948-kube-api-access-srppc\") pod \"kindnet-mlqjm\" (UID: \"526c1bc2-396a-4668-8248-d95483175948\") " pod="kube-system/kindnet-mlqjm"
	Oct 26 15:12:48 embed-certs-535130 kubelet[1316]: I1026 15:12:48.790234    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/526c1bc2-396a-4668-8248-d95483175948-cni-cfg\") pod \"kindnet-mlqjm\" (UID: \"526c1bc2-396a-4668-8248-d95483175948\") " pod="kube-system/kindnet-mlqjm"
	Oct 26 15:12:49 embed-certs-535130 kubelet[1316]: I1026 15:12:49.257406    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-mlqjm" podStartSLOduration=1.257380918 podStartE2EDuration="1.257380918s" podCreationTimestamp="2025-10-26 15:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:12:49.243058226 +0000 UTC m=+6.166464403" watchObservedRunningTime="2025-10-26 15:12:49.257380918 +0000 UTC m=+6.180787103"
	Oct 26 15:12:49 embed-certs-535130 kubelet[1316]: I1026 15:12:49.957326    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nbr2d" podStartSLOduration=1.957300253 podStartE2EDuration="1.957300253s" podCreationTimestamp="2025-10-26 15:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:12:49.278034475 +0000 UTC m=+6.201440660" watchObservedRunningTime="2025-10-26 15:12:49.957300253 +0000 UTC m=+6.880706438"
	Oct 26 15:12:59 embed-certs-535130 kubelet[1316]: I1026 15:12:59.919946    1316 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 26 15:13:00 embed-certs-535130 kubelet[1316]: I1026 15:13:00.074268    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ecac2fee-1c15-4fee-9ccd-cf42d0a041c3-tmp\") pod \"storage-provisioner\" (UID: \"ecac2fee-1c15-4fee-9ccd-cf42d0a041c3\") " pod="kube-system/storage-provisioner"
	Oct 26 15:13:00 embed-certs-535130 kubelet[1316]: I1026 15:13:00.074311    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96x85\" (UniqueName: \"kubernetes.io/projected/5ed72083-0ec8-4686-be6f-962755eee655-kube-api-access-96x85\") pod \"coredns-66bc5c9577-pnbct\" (UID: \"5ed72083-0ec8-4686-be6f-962755eee655\") " pod="kube-system/coredns-66bc5c9577-pnbct"
	Oct 26 15:13:00 embed-certs-535130 kubelet[1316]: I1026 15:13:00.074346    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ed72083-0ec8-4686-be6f-962755eee655-config-volume\") pod \"coredns-66bc5c9577-pnbct\" (UID: \"5ed72083-0ec8-4686-be6f-962755eee655\") " pod="kube-system/coredns-66bc5c9577-pnbct"
	Oct 26 15:13:00 embed-certs-535130 kubelet[1316]: I1026 15:13:00.074380    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spsgs\" (UniqueName: \"kubernetes.io/projected/ecac2fee-1c15-4fee-9ccd-cf42d0a041c3-kube-api-access-spsgs\") pod \"storage-provisioner\" (UID: \"ecac2fee-1c15-4fee-9ccd-cf42d0a041c3\") " pod="kube-system/storage-provisioner"
	Oct 26 15:13:01 embed-certs-535130 kubelet[1316]: I1026 15:13:01.267196    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pnbct" podStartSLOduration=13.267146255 podStartE2EDuration="13.267146255s" podCreationTimestamp="2025-10-26 15:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:13:01.26714284 +0000 UTC m=+18.190549022" watchObservedRunningTime="2025-10-26 15:13:01.267146255 +0000 UTC m=+18.190552441"
	Oct 26 15:13:01 embed-certs-535130 kubelet[1316]: I1026 15:13:01.278800    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.278779506 podStartE2EDuration="12.278779506s" podCreationTimestamp="2025-10-26 15:12:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:13:01.278608879 +0000 UTC m=+18.202015056" watchObservedRunningTime="2025-10-26 15:13:01.278779506 +0000 UTC m=+18.202185683"
	Oct 26 15:13:03 embed-certs-535130 kubelet[1316]: I1026 15:13:03.194781    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wkxq\" (UniqueName: \"kubernetes.io/projected/3a83ca98-1247-4189-b60f-6902a250ac9c-kube-api-access-9wkxq\") pod \"busybox\" (UID: \"3a83ca98-1247-4189-b60f-6902a250ac9c\") " pod="default/busybox"
	
	
	==> storage-provisioner [64d56a171688995b39bace2c8b91646ec0d09fed73fb3e01e59182eb277064a4] <==
	I1026 15:13:00.359558       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 15:13:00.370519       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 15:13:00.370654       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 15:13:00.373095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:00.380638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:13:00.380842       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:13:00.381083       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-535130_d82a9567-fd9e-4938-a925-a5c73ce7836f!
	I1026 15:13:00.381398       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5f6cf1c0-4446-438f-959a-2cf0430f7cb8", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-535130_d82a9567-fd9e-4938-a925-a5c73ce7836f became leader
	W1026 15:13:00.383906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:00.392195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:13:00.482354       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-535130_d82a9567-fd9e-4938-a925-a5c73ce7836f!
	W1026 15:13:02.396851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:02.401316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:04.405479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:04.411027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:06.414898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:06.418852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:08.421753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:08.432752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:10.436494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:10.466004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:12.472999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:12.482270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-535130 -n embed-certs-535130
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-535130 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-450976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-450976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (286.294618ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:13:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-450976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
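The stderr above shows the actual failure: before enabling the addon, minikube checks for paused containers by shelling out to "sudo runc list -f json" inside the node (the error chain "check paused: list paused: runc: ..." makes this explicit), and on this crio node /run/runc does not exist, so the check itself exits non-zero and the command aborts with MK_ADDON_ENABLE_PAUSED. A quick way to confirm that from the host is to re-run the same probe inside the kicbase container; the container name is taken from this run, and the crictl listing is only an assumed cross-check of what crio itself reports:

	# Re-run the exact paused-container probe minikube uses; while /run/runc is
	# absent it fails the same way as in the stderr above.
	docker exec newest-cni-450976 sudo runc list -f json

	# Assumed cross-check via crio's own container listing (nothing is paused):
	docker exec newest-cni-450976 sudo crictl ps -a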
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-450976
helpers_test.go:243: (dbg) docker inspect newest-cni-450976:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "780b6ec8823b2c38d1086c59e7fddd36420479fc7b248085a3cf4f4af2acf916",
	        "Created": "2025-10-26T15:12:59.003317793Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1104800,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:12:59.038569936Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/780b6ec8823b2c38d1086c59e7fddd36420479fc7b248085a3cf4f4af2acf916/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/780b6ec8823b2c38d1086c59e7fddd36420479fc7b248085a3cf4f4af2acf916/hostname",
	        "HostsPath": "/var/lib/docker/containers/780b6ec8823b2c38d1086c59e7fddd36420479fc7b248085a3cf4f4af2acf916/hosts",
	        "LogPath": "/var/lib/docker/containers/780b6ec8823b2c38d1086c59e7fddd36420479fc7b248085a3cf4f4af2acf916/780b6ec8823b2c38d1086c59e7fddd36420479fc7b248085a3cf4f4af2acf916-json.log",
	        "Name": "/newest-cni-450976",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-450976:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-450976",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "780b6ec8823b2c38d1086c59e7fddd36420479fc7b248085a3cf4f4af2acf916",
	                "LowerDir": "/var/lib/docker/overlay2/fe3ecd958d722f7448e33c5d5e455e3fd3a3f1954f672020596a899bb4dc58eb-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fe3ecd958d722f7448e33c5d5e455e3fd3a3f1954f672020596a899bb4dc58eb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fe3ecd958d722f7448e33c5d5e455e3fd3a3f1954f672020596a899bb4dc58eb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fe3ecd958d722f7448e33c5d5e455e3fd3a3f1954f672020596a899bb4dc58eb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-450976",
	                "Source": "/var/lib/docker/volumes/newest-cni-450976/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-450976",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-450976",
	                "name.minikube.sigs.k8s.io": "newest-cni-450976",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "737d6b51eed275e8b4650e74bfd3549abcaf5be81b3290f7bd40aec69ec9c779",
	            "SandboxKey": "/var/run/docker/netns/737d6b51eed2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33852"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33853"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33856"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33854"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33855"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-450976": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:b1:7c:78:53:6f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4254446822c371d2067f0edad3ee1d5a391333ca11c0b013055abf6c85fb5682",
	                    "EndpointID": "6dd5c1318052ab8dc762dbab19487e61b1fd05b690a47baa695df5186df751ed",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-450976",
	                        "780b6ec8823b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
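For triage, the two details that matter in an inspect dump like this are the container's run state and the forwarded API-server port. Both can be read directly with Go-template filters instead of scanning the full JSON; a minimal sketch, assuming the container name and 8443/tcp mapping shown above:

	# Host port Docker mapped to the node's API server (33855 in the dump above):
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' newest-cni-450976

	# Run state of the node container (running, not paused, per the dump):
	docker inspect -f '{{ .State.Status }} paused={{ .State.Paused }}' newest-cni-450976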
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-450976 -n newest-cni-450976
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-450976 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-450976 logs -n 25: (1.001464496s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-330914 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p old-k8s-version-330914 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:12 UTC │
	│ addons  │ enable dashboard -p no-preload-475081 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:11 UTC │
	│ start   │ -p no-preload-475081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:11 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p kubernetes-upgrade-176599 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-176599    │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	│ start   │ -p kubernetes-upgrade-176599 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-176599    │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ delete  │ -p kubernetes-upgrade-176599                                                                                                                                                                                                                  │ kubernetes-upgrade-176599    │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p embed-certs-535130 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-535130           │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:13 UTC │
	│ image   │ old-k8s-version-330914 image list --format=json                                                                                                                                                                                               │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ pause   │ -p old-k8s-version-330914 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	│ delete  │ -p old-k8s-version-330914                                                                                                                                                                                                                     │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ image   │ no-preload-475081 image list --format=json                                                                                                                                                                                                    │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ pause   │ -p no-preload-475081 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	│ delete  │ -p old-k8s-version-330914                                                                                                                                                                                                                     │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ delete  │ -p disable-driver-mounts-619402                                                                                                                                                                                                               │ disable-driver-mounts-619402 │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p default-k8s-diff-port-790012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-790012 │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	│ delete  │ -p no-preload-475081                                                                                                                                                                                                                          │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ delete  │ -p no-preload-475081                                                                                                                                                                                                                          │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p newest-cni-450976 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-450976            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:13 UTC │
	│ start   │ -p cert-expiration-619245 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-619245       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:13 UTC │
	│ delete  │ -p cert-expiration-619245                                                                                                                                                                                                                     │ cert-expiration-619245       │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ start   │ -p auto-498531 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-498531                  │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-535130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-535130           │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ stop    │ -p embed-certs-535130 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-535130           │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-450976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-450976            │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:13:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
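
Every entry below follows that glog-style header. As a reading aid, here is a minimal Go sketch of a parser for it; the regexp and field names are illustrative, not minikube's own logging code:

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the header format documented above:
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./_-]+):(\d+)\] (.*)$`)

func main() {
	line := `I1026 15:13:06.563756 1107827 out.go:360] Setting OutFile to fd 1 ...`
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s tid=%s file=%s:%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}
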
	I1026 15:13:06.563756 1107827 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:13:06.564037 1107827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:13:06.564048 1107827 out.go:374] Setting ErrFile to fd 2...
	I1026 15:13:06.564052 1107827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:13:06.564280 1107827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:13:06.564811 1107827 out.go:368] Setting JSON to false
	I1026 15:13:06.566075 1107827 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10535,"bootTime":1761481052,"procs":365,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:13:06.566200 1107827 start.go:141] virtualization: kvm guest
	I1026 15:13:06.568289 1107827 out.go:179] * [auto-498531] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:13:06.570053 1107827 notify.go:220] Checking for updates...
	I1026 15:13:06.570069 1107827 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:13:06.571369 1107827 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:13:06.572627 1107827 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:13:06.573903 1107827 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 15:13:06.578386 1107827 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:13:06.579639 1107827 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:13:06.581141 1107827 config.go:182] Loaded profile config "default-k8s-diff-port-790012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:06.581264 1107827 config.go:182] Loaded profile config "embed-certs-535130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:06.581415 1107827 config.go:182] Loaded profile config "newest-cni-450976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:06.581572 1107827 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:13:06.607576 1107827 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 15:13:06.607741 1107827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:13:06.671748 1107827 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 15:13:06.660306344 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:13:06.671864 1107827 docker.go:318] overlay module found
	I1026 15:13:06.673623 1107827 out.go:179] * Using the docker driver based on user configuration
	I1026 15:13:06.675496 1107827 start.go:305] selected driver: docker
	I1026 15:13:06.675515 1107827 start.go:925] validating driver "docker" against <nil>
	I1026 15:13:06.675528 1107827 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:13:06.676157 1107827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:13:06.738503 1107827 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 15:13:06.728252144 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:13:06.738684 1107827 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:13:06.738906 1107827 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:13:06.740536 1107827 out.go:179] * Using Docker driver with root privileges
	I1026 15:13:06.741758 1107827 cni.go:84] Creating CNI manager for ""
	I1026 15:13:06.741820 1107827 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:13:06.741831 1107827 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 15:13:06.741891 1107827 start.go:349] cluster config:
	{Name:auto-498531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:13:06.743088 1107827 out.go:179] * Starting "auto-498531" primary control-plane node in "auto-498531" cluster
	I1026 15:13:06.744385 1107827 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:13:06.745802 1107827 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:13:06.746991 1107827 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:13:06.747041 1107827 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 15:13:06.747061 1107827 cache.go:58] Caching tarball of preloaded images
	I1026 15:13:06.747096 1107827 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:13:06.747176 1107827 preload.go:233] Found /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 15:13:06.747193 1107827 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:13:06.747301 1107827 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/config.json ...
	I1026 15:13:06.747328 1107827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/config.json: {Name:mk51395bb2b43f058ea11f2c355376c86dda34ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:06.769312 1107827 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:13:06.769337 1107827 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:13:06.769355 1107827 cache.go:232] Successfully downloaded all kic artifacts
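
The cache probe just above pulls the kic base image only when the local Docker daemon does not already have it. A hedged Go sketch of that check-before-pull pattern, shelling out the way cli_runner does (illustrative, not minikube's image.go):

package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether the local Docker daemon already has ref.
// `docker image inspect` exits non-zero for unknown images.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"
	if imageInDaemon(ref) {
		fmt.Println("found in local docker daemon, skipping pull")
		return
	}
	fmt.Println("not cached locally; pulling", ref)
	_ = exec.Command("docker", "pull", ref).Run()
}
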
	I1026 15:13:06.769385 1107827 start.go:360] acquireMachinesLock for auto-498531: {Name:mk2fc728ab6ac55049fdc8daa1ba88be08fec125 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:13:06.769488 1107827 start.go:364] duration metric: took 81.976µs to acquireMachinesLock for "auto-498531"
	I1026 15:13:06.769517 1107827 start.go:93] Provisioning new machine with config: &{Name:auto-498531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:13:06.769600 1107827 start.go:125] createHost starting for "" (driver="docker")
	I1026 15:13:04.383076 1103368 out.go:252]   - Generating certificates and keys ...
	I1026 15:13:04.383156 1103368 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:13:04.383247 1103368 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:13:04.592061 1103368 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:13:04.690538 1103368 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:13:04.938410 1103368 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:13:05.119895 1103368 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:13:05.575678 1103368 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:13:05.575873 1103368 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-450976] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1026 15:13:05.633558 1103368 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:13:05.633792 1103368 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-450976] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1026 15:13:06.189292 1103368 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:13:06.503235 1103368 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:13:06.586480 1103368 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:13:06.586574 1103368 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:13:06.885001 1103368 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:13:07.150062 1103368 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 15:13:07.572037 1103368 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:13:07.627755 1103368 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:13:07.977735 1103368 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:13:07.978469 1103368 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:13:07.986445 1103368 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 15:13:07.988147 1103368 out.go:252]   - Booting up control plane ...
	I1026 15:13:07.988300 1103368 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:13:07.988414 1103368 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:13:07.989323 1103368 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:13:08.012783 1103368 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:13:08.012935 1103368 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:13:08.022381 1103368 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:13:08.023018 1103368 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:13:08.023095 1103368 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:13:08.165977 1103368 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:13:08.166258 1103368 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
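
The kubelet-check above polls http://127.0.0.1:10248/healthz until it answers 200 or the 4m0s budget runs out. A minimal Go sketch of such a waiter; the poll interval is an assumption, not kubeadm's exact value:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or timeout elapses.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
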
	I1026 15:13:06.772446 1107827 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 15:13:06.772651 1107827 start.go:159] libmachine.API.Create for "auto-498531" (driver="docker")
	I1026 15:13:06.772678 1107827 client.go:168] LocalClient.Create starting
	I1026 15:13:06.772772 1107827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem
	I1026 15:13:06.772839 1107827 main.go:141] libmachine: Decoding PEM data...
	I1026 15:13:06.772863 1107827 main.go:141] libmachine: Parsing certificate...
	I1026 15:13:06.772943 1107827 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem
	I1026 15:13:06.772969 1107827 main.go:141] libmachine: Decoding PEM data...
	I1026 15:13:06.772986 1107827 main.go:141] libmachine: Parsing certificate...
	I1026 15:13:06.773367 1107827 cli_runner.go:164] Run: docker network inspect auto-498531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 15:13:06.794081 1107827 cli_runner.go:211] docker network inspect auto-498531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 15:13:06.794156 1107827 network_create.go:284] running [docker network inspect auto-498531] to gather additional debugging logs...
	I1026 15:13:06.794194 1107827 cli_runner.go:164] Run: docker network inspect auto-498531
	W1026 15:13:06.817612 1107827 cli_runner.go:211] docker network inspect auto-498531 returned with exit code 1
	I1026 15:13:06.817656 1107827 network_create.go:287] error running [docker network inspect auto-498531]: docker network inspect auto-498531: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-498531 not found
	I1026 15:13:06.817675 1107827 network_create.go:289] output of [docker network inspect auto-498531]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-498531 not found
	
	** /stderr **
	I1026 15:13:06.817796 1107827 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:13:06.838553 1107827 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fa58be42f477 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:e4:ad:45:54:67} reservation:<nil>}
	I1026 15:13:06.839352 1107827 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-788b1aa150f9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:3d:9b:f7:9b:2d} reservation:<nil>}
	I1026 15:13:06.840103 1107827 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3ea0f8afe5af IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:d6:81:f4:17:77:eb} reservation:<nil>}
	I1026 15:13:06.840681 1107827 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c696734ed668 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:9a:3a:13:85:1e} reservation:<nil>}
	I1026 15:13:06.841441 1107827 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-eb8db690bfd7 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:c2:80:70:9a:55:40} reservation:<nil>}
	I1026 15:13:06.842536 1107827 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f8f030}
	I1026 15:13:06.842558 1107827 network_create.go:124] attempt to create docker network auto-498531 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1026 15:13:06.842600 1107827 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-498531 auto-498531
	I1026 15:13:06.903641 1107827 network_create.go:108] docker network auto-498531 192.168.94.0/24 created
	I1026 15:13:06.903672 1107827 kic.go:121] calculated static IP "192.168.94.2" for the "auto-498531" container
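
The subnet scan above starts at 192.168.49.0/24 and steps the third octet by 9 until it finds a /24 that no existing bridge occupies, landing on 192.168.94.0/24 here. A Go sketch of that selection; the taken set is hard-coded from the log for illustration, whereas minikube derives it from docker network inspect:

package main

import "fmt"

// firstFreeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... (step 9,
// matching the skips logged above) and returns the first unclaimed /24.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	// Prints: using free private subnet 192.168.94.0/24
	fmt.Println("using free private subnet", firstFreeSubnet(taken))
}
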
	I1026 15:13:06.903752 1107827 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 15:13:06.922096 1107827 cli_runner.go:164] Run: docker volume create auto-498531 --label name.minikube.sigs.k8s.io=auto-498531 --label created_by.minikube.sigs.k8s.io=true
	I1026 15:13:06.940455 1107827 oci.go:103] Successfully created a docker volume auto-498531
	I1026 15:13:06.940553 1107827 cli_runner.go:164] Run: docker run --rm --name auto-498531-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-498531 --entrypoint /usr/bin/test -v auto-498531:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 15:13:07.355323 1107827 oci.go:107] Successfully prepared a docker volume auto-498531
	I1026 15:13:07.355376 1107827 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:13:07.355416 1107827 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 15:13:07.355476 1107827 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-498531:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 15:13:09.667208 1103368 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501211393s
	I1026 15:13:09.671749 1103368 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 15:13:09.671905 1103368 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1026 15:13:09.672039 1103368 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 15:13:09.672179 1103368 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 15:13:13.417660 1103368 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.74585927s
	I1026 15:13:14.290372 1103368 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.618659621s
	I1026 15:13:14.554070 1100384 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 15:13:14.554304 1100384 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 15:13:14.554422 1100384 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 15:13:14.554503 1100384 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 15:13:14.554549 1100384 kubeadm.go:318] OS: Linux
	I1026 15:13:14.554620 1100384 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 15:13:14.554699 1100384 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 15:13:14.554770 1100384 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 15:13:14.554851 1100384 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 15:13:14.554922 1100384 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 15:13:14.554998 1100384 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 15:13:14.555071 1100384 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 15:13:14.555143 1100384 kubeadm.go:318] CGROUPS_IO: enabled
	I1026 15:13:14.555262 1100384 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 15:13:14.555420 1100384 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 15:13:14.555553 1100384 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 15:13:14.555649 1100384 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 15:13:14.557464 1100384 out.go:252]   - Generating certificates and keys ...
	I1026 15:13:14.557561 1100384 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:13:14.557667 1100384 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:13:14.557760 1100384 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:13:14.557861 1100384 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:13:14.557955 1100384 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:13:14.558035 1100384 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:13:14.558093 1100384 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:13:14.558264 1100384 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-790012 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 15:13:14.558353 1100384 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:13:14.558544 1100384 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-790012 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 15:13:14.558627 1100384 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:13:14.558705 1100384 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:13:14.558761 1100384 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:13:14.558837 1100384 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:13:14.558900 1100384 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:13:14.558980 1100384 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 15:13:14.559039 1100384 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:13:14.559147 1100384 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:13:14.559301 1100384 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:13:14.559455 1100384 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:13:14.559566 1100384 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 15:13:14.560918 1100384 out.go:252]   - Booting up control plane ...
	I1026 15:13:14.561049 1100384 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:13:14.561194 1100384 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:13:14.561285 1100384 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:13:14.561406 1100384 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:13:14.561540 1100384 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:13:14.561629 1100384 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:13:14.561731 1100384 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:13:14.561800 1100384 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:13:14.561990 1100384 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:13:14.562098 1100384 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 15:13:14.562193 1100384 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001357847s
	I1026 15:13:14.562329 1100384 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 15:13:14.562400 1100384 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1026 15:13:14.562526 1100384 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 15:13:14.562595 1100384 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 15:13:14.562672 1100384 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.316939981s
	I1026 15:13:14.562778 1100384 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.440197376s
	I1026 15:13:14.562877 1100384 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.002287398s
	I1026 15:13:14.563021 1100384 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 15:13:14.563203 1100384 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 15:13:14.563283 1100384 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 15:13:14.563567 1100384 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-790012 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 15:13:14.563648 1100384 kubeadm.go:318] [bootstrap-token] Using token: o9h02e.bgvupcsvyc71vxds
	I1026 15:13:14.566228 1100384 out.go:252]   - Configuring RBAC rules ...
	I1026 15:13:14.566395 1100384 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 15:13:14.566523 1100384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 15:13:14.566725 1100384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 15:13:14.566877 1100384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 15:13:14.567007 1100384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 15:13:14.567111 1100384 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 15:13:14.567285 1100384 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 15:13:14.567373 1100384 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 15:13:14.567454 1100384 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 15:13:14.567469 1100384 kubeadm.go:318] 
	I1026 15:13:14.567572 1100384 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 15:13:14.567593 1100384 kubeadm.go:318] 
	I1026 15:13:14.567720 1100384 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 15:13:14.567738 1100384 kubeadm.go:318] 
	I1026 15:13:14.567773 1100384 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 15:13:14.567866 1100384 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 15:13:14.567940 1100384 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 15:13:14.567949 1100384 kubeadm.go:318] 
	I1026 15:13:14.568029 1100384 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 15:13:14.568036 1100384 kubeadm.go:318] 
	I1026 15:13:14.568099 1100384 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 15:13:14.568108 1100384 kubeadm.go:318] 
	I1026 15:13:14.568201 1100384 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 15:13:14.568312 1100384 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 15:13:14.568408 1100384 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 15:13:14.568418 1100384 kubeadm.go:318] 
	I1026 15:13:14.568553 1100384 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 15:13:14.568688 1100384 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 15:13:14.568704 1100384 kubeadm.go:318] 
	I1026 15:13:14.568821 1100384 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token o9h02e.bgvupcsvyc71vxds \
	I1026 15:13:14.568962 1100384 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:17405a11f9ced5253329d88582717a258ab19676719f7fb1d52a2fb8fc3ffa0b \
	I1026 15:13:14.568998 1100384 kubeadm.go:318] 	--control-plane 
	I1026 15:13:14.569012 1100384 kubeadm.go:318] 
	I1026 15:13:14.569121 1100384 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 15:13:14.569133 1100384 kubeadm.go:318] 
	I1026 15:13:14.569257 1100384 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token o9h02e.bgvupcsvyc71vxds \
	I1026 15:13:14.569406 1100384 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:17405a11f9ced5253329d88582717a258ab19676719f7fb1d52a2fb8fc3ffa0b 
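
The --discovery-token-ca-cert-hash printed above is, per kubeadm's documentation, the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A Go sketch that recomputes it; the certificate path is an assumption for illustration:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Assumed location of the cluster CA inside a minikube node.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// RawSubjectPublicKeyInfo is the DER-encoded SPKI that kubeadm hashes.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
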
	I1026 15:13:14.569420 1100384 cni.go:84] Creating CNI manager for ""
	I1026 15:13:14.569430 1100384 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:13:14.571070 1100384 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 15:13:16.173492 1103368 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.501718654s
	I1026 15:13:16.186828 1103368 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 15:13:16.202016 1103368 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 15:13:16.213499 1103368 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 15:13:16.213735 1103368 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-450976 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 15:13:16.224432 1103368 kubeadm.go:318] [bootstrap-token] Using token: w2sevx.mfsru7n7sv3rf84k
	I1026 15:13:11.918345 1107827 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-498531:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.562788s)
	I1026 15:13:11.918385 1107827 kic.go:203] duration metric: took 4.562978404s to extract preloaded images to volume ...
	W1026 15:13:11.918487 1107827 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 15:13:11.918527 1107827 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 15:13:11.918579 1107827 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 15:13:12.013515 1107827 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-498531 --name auto-498531 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-498531 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-498531 --network auto-498531 --ip 192.168.94.2 --volume auto-498531:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 15:13:12.428173 1107827 cli_runner.go:164] Run: docker container inspect auto-498531 --format={{.State.Running}}
	I1026 15:13:12.452204 1107827 cli_runner.go:164] Run: docker container inspect auto-498531 --format={{.State.Status}}
	I1026 15:13:12.477014 1107827 cli_runner.go:164] Run: docker exec auto-498531 stat /var/lib/dpkg/alternatives/iptables
	I1026 15:13:12.548377 1107827 oci.go:144] the created container "auto-498531" has a running status.
	I1026 15:13:12.548434 1107827 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/auto-498531/id_rsa...
	I1026 15:13:13.058590 1107827 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-841519/.minikube/machines/auto-498531/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 15:13:13.095491 1107827 cli_runner.go:164] Run: docker container inspect auto-498531 --format={{.State.Status}}
	I1026 15:13:13.120158 1107827 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 15:13:13.120194 1107827 kic_runner.go:114] Args: [docker exec --privileged auto-498531 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 15:13:13.189088 1107827 cli_runner.go:164] Run: docker container inspect auto-498531 --format={{.State.Status}}
	I1026 15:13:13.216158 1107827 machine.go:93] provisionDockerMachine start ...
	I1026 15:13:13.216282 1107827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-498531
	I1026 15:13:13.250033 1107827 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:13.250408 1107827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33857 <nil> <nil>}
	I1026 15:13:13.250426 1107827 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:13:13.416549 1107827 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-498531
	
	I1026 15:13:13.416594 1107827 ubuntu.go:182] provisioning hostname "auto-498531"
	I1026 15:13:13.416677 1107827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-498531
	I1026 15:13:13.441590 1107827 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:13.441878 1107827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33857 <nil> <nil>}
	I1026 15:13:13.441900 1107827 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-498531 && echo "auto-498531" | sudo tee /etc/hostname
	I1026 15:13:13.620596 1107827 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-498531
	
	I1026 15:13:13.620680 1107827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-498531
	I1026 15:13:13.646597 1107827 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:13.646846 1107827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33857 <nil> <nil>}
	I1026 15:13:13.646874 1107827 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-498531' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-498531/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-498531' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:13:13.806850 1107827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:13:13.806886 1107827 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-841519/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-841519/.minikube}
	I1026 15:13:13.807200 1107827 ubuntu.go:190] setting up certificates
	I1026 15:13:13.807218 1107827 provision.go:84] configureAuth start
	I1026 15:13:13.807287 1107827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-498531
	I1026 15:13:13.835188 1107827 provision.go:143] copyHostCerts
	I1026 15:13:13.835265 1107827 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem, removing ...
	I1026 15:13:13.835277 1107827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem
	I1026 15:13:13.835619 1107827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem (1082 bytes)
	I1026 15:13:13.835882 1107827 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem, removing ...
	I1026 15:13:13.835941 1107827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem
	I1026 15:13:13.835996 1107827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem (1123 bytes)
	I1026 15:13:13.836197 1107827 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem, removing ...
	I1026 15:13:13.836313 1107827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem
	I1026 15:13:13.836364 1107827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem (1675 bytes)
	I1026 15:13:13.836451 1107827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem org=jenkins.auto-498531 san=[127.0.0.1 192.168.94.2 auto-498531 localhost minikube]
	I1026 15:13:14.060922 1107827 provision.go:177] copyRemoteCerts
	I1026 15:13:14.061014 1107827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:13:14.061071 1107827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-498531
	I1026 15:13:14.089087 1107827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/auto-498531/id_rsa Username:docker}
	I1026 15:13:14.206387 1107827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:13:14.233840 1107827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1026 15:13:14.269689 1107827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:13:14.296536 1107827 provision.go:87] duration metric: took 489.297195ms to configureAuth
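
The configureAuth step above mints a server certificate signed by the minikube CA with the SANs logged (127.0.0.1, 192.168.94.2, auto-498531, localhost, minikube). A condensed Go sketch of issuing such a cert, using a freshly generated throwaway CA rather than minikube's real one; lifetimes and key sizes are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for the persistent minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "auto-498531"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		// SANs matching the provision log above.
		DNSNames:    []string{"auto-498531", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
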
	I1026 15:13:14.296571 1107827 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:13:14.296755 1107827 config.go:182] Loaded profile config "auto-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:14.296861 1107827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-498531
	I1026 15:13:14.317790 1107827 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:14.318320 1107827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33857 <nil> <nil>}
	I1026 15:13:14.318357 1107827 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:13:14.621603 1107827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:13:14.621684 1107827 machine.go:96] duration metric: took 1.405479183s to provisionDockerMachine
	I1026 15:13:14.621703 1107827 client.go:171] duration metric: took 7.849017388s to LocalClient.Create
	I1026 15:13:14.621720 1107827 start.go:167] duration metric: took 7.849069825s to libmachine.API.Create "auto-498531"
	I1026 15:13:14.621729 1107827 start.go:293] postStartSetup for "auto-498531" (driver="docker")
	I1026 15:13:14.621742 1107827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:13:14.621824 1107827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:13:14.621874 1107827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-498531
	I1026 15:13:14.644690 1107827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/auto-498531/id_rsa Username:docker}
	I1026 15:13:14.752906 1107827 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:13:14.756788 1107827 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:13:14.756830 1107827 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:13:14.756842 1107827 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/addons for local assets ...
	I1026 15:13:14.756901 1107827 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/files for local assets ...
	I1026 15:13:14.757010 1107827 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem -> 8450952.pem in /etc/ssl/certs
	I1026 15:13:14.757135 1107827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:13:14.765261 1107827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:13:14.787343 1107827 start.go:296] duration metric: took 165.59696ms for postStartSetup
	I1026 15:13:14.787758 1107827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-498531
	I1026 15:13:14.810510 1107827 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/config.json ...
	I1026 15:13:14.810898 1107827 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:13:14.810952 1107827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-498531
	I1026 15:13:14.841783 1107827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/auto-498531/id_rsa Username:docker}
	I1026 15:13:14.952935 1107827 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:13:14.959023 1107827 start.go:128] duration metric: took 8.189405681s to createHost
	I1026 15:13:14.959050 1107827 start.go:83] releasing machines lock for "auto-498531", held for 8.189549576s
	I1026 15:13:14.959123 1107827 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-498531
	I1026 15:13:14.990397 1107827 ssh_runner.go:195] Run: cat /version.json
	I1026 15:13:14.990469 1107827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-498531
	I1026 15:13:14.990475 1107827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:13:14.990542 1107827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-498531
	I1026 15:13:15.010721 1107827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/auto-498531/id_rsa Username:docker}
	I1026 15:13:15.012263 1107827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/auto-498531/id_rsa Username:docker}
	I1026 15:13:15.169273 1107827 ssh_runner.go:195] Run: systemctl --version
	I1026 15:13:15.176406 1107827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:13:15.213137 1107827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:13:15.218010 1107827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:13:15.218066 1107827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:13:15.249266 1107827 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 15:13:15.249293 1107827 start.go:495] detecting cgroup driver to use...
	I1026 15:13:15.249330 1107827 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 15:13:15.249396 1107827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:13:15.270806 1107827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:13:15.286802 1107827 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:13:15.286863 1107827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:13:15.307811 1107827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:13:15.330272 1107827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:13:15.433721 1107827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:13:15.553351 1107827 docker.go:234] disabling docker service ...
	I1026 15:13:15.553421 1107827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:13:15.577824 1107827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:13:15.593917 1107827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:13:15.697220 1107827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:13:15.803058 1107827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:13:15.821491 1107827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:13:15.841625 1107827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:13:15.841709 1107827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:15.854897 1107827 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 15:13:15.854970 1107827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:15.867354 1107827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:15.878456 1107827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:15.891926 1107827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:13:15.901724 1107827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:15.911440 1107827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:15.927441 1107827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:15.939015 1107827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:13:15.948931 1107827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
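	The sed edits above amount to a drop-in along the following lines in /etc/crio/crio.conf.d/02-crio.conf. This is a sketch reconstructed from the commands in this log (the section headers are assumed; the file itself is never dumped here):

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"

	    [crio.runtime]
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]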
	I1026 15:13:15.958679 1107827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:16.061134 1107827 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:13:16.177646 1107827 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:13:16.177715 1107827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:13:16.182914 1107827 start.go:563] Will wait 60s for crictl version
	I1026 15:13:16.182979 1107827 ssh_runner.go:195] Run: which crictl
	I1026 15:13:16.187956 1107827 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:13:16.218262 1107827 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:13:16.218353 1107827 ssh_runner.go:195] Run: crio --version
	I1026 15:13:16.255545 1107827 ssh_runner.go:195] Run: crio --version
	I1026 15:13:16.292880 1107827 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:13:16.294042 1107827 cli_runner.go:164] Run: docker network inspect auto-498531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:13:16.314589 1107827 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1026 15:13:16.319022 1107827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:13:16.330153 1107827 kubeadm.go:883] updating cluster {Name:auto-498531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:13:16.330321 1107827 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:13:16.330387 1107827 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:13:16.365932 1107827 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:13:16.365954 1107827 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:13:16.366001 1107827 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:13:16.394039 1107827 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:13:16.394063 1107827 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:13:16.394071 1107827 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1026 15:13:16.394178 1107827 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-498531 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:13:16.394263 1107827 ssh_runner.go:195] Run: crio config
	I1026 15:13:16.440777 1107827 cni.go:84] Creating CNI manager for ""
	I1026 15:13:16.440803 1107827 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:13:16.440820 1107827 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:13:16.440843 1107827 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-498531 NodeName:auto-498531 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:13:16.440962 1107827 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-498531"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 15:13:16.441046 1107827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:13:16.449981 1107827 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:13:16.450052 1107827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:13:16.459096 1107827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1026 15:13:16.473841 1107827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:13:16.491669 1107827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
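	The four documents rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are what lands in /var/tmp/minikube/kubeadm.yaml.new. To sanity-check such a file by hand, recent kubeadm releases ship a validator subcommand; its availability for this exact config is an assumption here, not something this log exercises:

	    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new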
	I1026 15:13:16.505628 1107827 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:13:16.510121 1107827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:13:16.521625 1107827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:14.572296 1100384 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 15:13:14.577318 1100384 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 15:13:14.577342 1100384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 15:13:14.593862 1100384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 15:13:14.873839 1100384 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:13:14.873933 1100384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:14.874018 1100384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-790012 minikube.k8s.io/updated_at=2025_10_26T15_13_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=default-k8s-diff-port-790012 minikube.k8s.io/primary=true
	I1026 15:13:14.956334 1100384 ops.go:34] apiserver oom_adj: -16
	I1026 15:13:14.956369 1100384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:15.456916 1100384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:15.956837 1100384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:16.456532 1100384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:16.225708 1103368 out.go:252]   - Configuring RBAC rules ...
	I1026 15:13:16.225848 1103368 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 15:13:16.230848 1103368 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 15:13:16.237473 1103368 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 15:13:16.240267 1103368 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 15:13:16.243953 1103368 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 15:13:16.246802 1103368 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 15:13:16.580571 1103368 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 15:13:17.000491 1103368 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 15:13:17.580748 1103368 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 15:13:17.582029 1103368 kubeadm.go:318] 
	I1026 15:13:17.582125 1103368 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 15:13:17.582133 1103368 kubeadm.go:318] 
	I1026 15:13:17.582286 1103368 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 15:13:17.582300 1103368 kubeadm.go:318] 
	I1026 15:13:17.582334 1103368 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 15:13:17.582449 1103368 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 15:13:17.582544 1103368 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 15:13:17.582560 1103368 kubeadm.go:318] 
	I1026 15:13:17.582641 1103368 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 15:13:17.582650 1103368 kubeadm.go:318] 
	I1026 15:13:17.582709 1103368 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 15:13:17.582718 1103368 kubeadm.go:318] 
	I1026 15:13:17.582807 1103368 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 15:13:17.582915 1103368 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 15:13:17.583032 1103368 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 15:13:17.583045 1103368 kubeadm.go:318] 
	I1026 15:13:17.583189 1103368 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 15:13:17.583295 1103368 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 15:13:17.583306 1103368 kubeadm.go:318] 
	I1026 15:13:17.583428 1103368 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token w2sevx.mfsru7n7sv3rf84k \
	I1026 15:13:17.583580 1103368 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:17405a11f9ced5253329d88582717a258ab19676719f7fb1d52a2fb8fc3ffa0b \
	I1026 15:13:17.583613 1103368 kubeadm.go:318] 	--control-plane 
	I1026 15:13:17.583623 1103368 kubeadm.go:318] 
	I1026 15:13:17.583736 1103368 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 15:13:17.583743 1103368 kubeadm.go:318] 
	I1026 15:13:17.583870 1103368 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token w2sevx.mfsru7n7sv3rf84k \
	I1026 15:13:17.584001 1103368 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:17405a11f9ced5253329d88582717a258ab19676719f7fb1d52a2fb8fc3ffa0b 
	I1026 15:13:17.586938 1103368 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1026 15:13:17.587102 1103368 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 15:13:17.587129 1103368 cni.go:84] Creating CNI manager for ""
	I1026 15:13:17.587146 1103368 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:13:17.589441 1103368 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 15:13:17.590531 1103368 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 15:13:17.594867 1103368 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 15:13:17.594887 1103368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 15:13:17.609063 1103368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 15:13:17.850495 1103368 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:13:17.850598 1103368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:17.850633 1103368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-450976 minikube.k8s.io/updated_at=2025_10_26T15_13_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=newest-cni-450976 minikube.k8s.io/primary=true
	I1026 15:13:17.861413 1103368 ops.go:34] apiserver oom_adj: -16
	I1026 15:13:17.943628 1103368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:18.444516 1103368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:18.944639 1103368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:16.956638 1100384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:17.456886 1100384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:17.956434 1100384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:18.456746 1100384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:18.957080 1100384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:19.457053 1100384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:19.534904 1100384 kubeadm.go:1113] duration metric: took 4.661046383s to wait for elevateKubeSystemPrivileges
	I1026 15:13:19.534949 1100384 kubeadm.go:402] duration metric: took 18.93443356s to StartCluster
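	The half-second "kubectl get sa default" retries above are a readiness poll: bootstrap is treated as complete once the default service account exists, which is what elevateKubeSystemPrivileges waits for. A minimal standalone sketch of the same loop, assuming only kubectl on PATH and the kubeconfig path used in this log (this is not minikube's actual implementation):

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"os/exec"
	    	"time"
	    )

	    func main() {
	    	deadline := time.Now().Add(2 * time.Minute)
	    	for time.Now().Before(deadline) {
	    		// Exits 0 only once the service account has been created.
	    		cmd := exec.Command("kubectl", "get", "sa", "default",
	    			"--kubeconfig", "/var/lib/minikube/kubeconfig")
	    		if cmd.Run() == nil {
	    			fmt.Println("default service account is ready")
	    			return
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	fmt.Fprintln(os.Stderr, "timed out waiting for default service account")
	    	os.Exit(1)
	    }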
	I1026 15:13:19.534974 1100384 settings.go:142] acquiring lock: {Name:mkab79daecf1fab35293493e1e2484069a81f3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:19.535062 1100384 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:13:19.537037 1100384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:19.537344 1100384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:13:19.537365 1100384 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:13:19.537408 1100384 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:13:19.537507 1100384 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-790012"
	I1026 15:13:19.537532 1100384 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-790012"
	I1026 15:13:19.537533 1100384 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-790012"
	I1026 15:13:19.537558 1100384 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-790012"
	I1026 15:13:19.537562 1100384 host.go:66] Checking if "default-k8s-diff-port-790012" exists ...
	I1026 15:13:19.537602 1100384 config.go:182] Loaded profile config "default-k8s-diff-port-790012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:19.537936 1100384 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-790012 --format={{.State.Status}}
	I1026 15:13:19.538035 1100384 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-790012 --format={{.State.Status}}
	I1026 15:13:19.539867 1100384 out.go:179] * Verifying Kubernetes components...
	I1026 15:13:19.541841 1100384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:19.566984 1100384 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:13:19.568102 1100384 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-790012"
	I1026 15:13:19.568146 1100384 host.go:66] Checking if "default-k8s-diff-port-790012" exists ...
	I1026 15:13:19.568287 1100384 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:19.568308 1100384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:13:19.568370 1100384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-790012
	I1026 15:13:19.568641 1100384 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-790012 --format={{.State.Status}}
	I1026 15:13:19.606448 1100384 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:19.606480 1100384 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:13:19.606556 1100384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-790012
	I1026 15:13:19.607733 1100384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/default-k8s-diff-port-790012/id_rsa Username:docker}
	I1026 15:13:19.636149 1100384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33847 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/default-k8s-diff-port-790012/id_rsa Username:docker}
	I1026 15:13:19.664224 1100384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
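	The sed pipeline above edits the coredns ConfigMap in flight; after the "kubectl replace", the Corefile gains a log directive and a hosts stanza roughly like this (reconstructed from the sed expressions, not dumped from the cluster):

	    log
	    errors
	    hosts {
	       192.168.85.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf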
	I1026 15:13:19.715733 1100384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:19.740635 1100384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:19.771567 1100384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:19.859647 1100384 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1026 15:13:19.861381 1100384 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-790012" to be "Ready" ...
	I1026 15:13:20.084610 1100384 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1026 15:13:16.610778 1107827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:16.636847 1107827 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531 for IP: 192.168.94.2
	I1026 15:13:16.636868 1107827 certs.go:195] generating shared ca certs ...
	I1026 15:13:16.636884 1107827 certs.go:227] acquiring lock for ca certs: {Name:mkc310765b5f037cf348f6c57ba521193a825757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:16.637073 1107827 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key
	I1026 15:13:16.637142 1107827 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key
	I1026 15:13:16.637157 1107827 certs.go:257] generating profile certs ...
	I1026 15:13:16.637246 1107827 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/client.key
	I1026 15:13:16.637271 1107827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/client.crt with IP's: []
	I1026 15:13:17.199574 1107827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/client.crt ...
	I1026 15:13:17.199606 1107827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/client.crt: {Name:mk514744fb397aa93353c61ace0949ffc508f4a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:17.199786 1107827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/client.key ...
	I1026 15:13:17.199798 1107827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/client.key: {Name:mk3257ce8e7c2815165d31f2e76e6a271dc8f974 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:17.199877 1107827 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/apiserver.key.32fa66b3
	I1026 15:13:17.199893 1107827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/apiserver.crt.32fa66b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1026 15:13:17.392741 1107827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/apiserver.crt.32fa66b3 ...
	I1026 15:13:17.392769 1107827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/apiserver.crt.32fa66b3: {Name:mk213ccf3abee6e4868a369243b5dd8c2cf9d926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:17.392931 1107827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/apiserver.key.32fa66b3 ...
	I1026 15:13:17.392945 1107827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/apiserver.key.32fa66b3: {Name:mk8925f5c1222c9004f71e79dd5e1de53ca63f0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:17.393029 1107827 certs.go:382] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/apiserver.crt.32fa66b3 -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/apiserver.crt
	I1026 15:13:17.393104 1107827 certs.go:386] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/apiserver.key.32fa66b3 -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/apiserver.key
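	For reference, the apiserver cert generated above carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]: the service VIP, loopback, the legacy service VIP and the node IP. A hypothetical Go sketch of issuing such a cert from an existing CA follows; the file names are illustrative and the CA key is assumed to be PKCS#1 RSA, which may differ from what minikube's certs.go does internally:

	    package main

	    import (
	    	"crypto/rand"
	    	"crypto/rsa"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"encoding/pem"
	    	"log"
	    	"math/big"
	    	"net"
	    	"os"
	    	"time"
	    )

	    func main() {
	    	// Load an existing CA pair (illustrative paths).
	    	caPEM, err := os.ReadFile("ca.crt")
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	caKeyPEM, err := os.ReadFile("ca.key")
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	caBlock, _ := pem.Decode(caPEM)       // assume well-formed PEM
	    	caKeyBlock, _ := pem.Decode(caKeyPEM) // assume well-formed PEM
	    	ca, err := x509.ParseCertificate(caBlock.Bytes)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	caKey, err := x509.ParsePKCS1PrivateKey(caKeyBlock.Bytes)
	    	if err != nil {
	    		log.Fatal(err)
	    	}

	    	// New serving key with the IP SANs seen in the log above.
	    	key, err := rsa.GenerateKey(rand.Reader, 2048)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	tmpl := &x509.Certificate{
	    		SerialNumber: big.NewInt(2),
	    		Subject:      pkix.Name{CommonName: "minikube"},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    		IPAddresses: []net.IP{
	    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
	    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
	    		},
	    	}
	    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	out := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	    	if err := os.WriteFile("apiserver.crt", out, 0o644); err != nil {
	    		log.Fatal(err)
	    	}
	    }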
	I1026 15:13:17.393177 1107827 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/proxy-client.key
	I1026 15:13:17.393193 1107827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/proxy-client.crt with IP's: []
	I1026 15:13:17.928056 1107827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/proxy-client.crt ...
	I1026 15:13:17.928093 1107827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/proxy-client.crt: {Name:mkff3d0e7c2205da81869b6cf55ffde4967d3471 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:17.928337 1107827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/proxy-client.key ...
	I1026 15:13:17.928401 1107827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/proxy-client.key: {Name:mk3b5b33586d008cc9e88b433e791ba5be5e6fd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:17.928678 1107827 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem (1338 bytes)
	W1026 15:13:17.928722 1107827 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095_empty.pem, impossibly tiny 0 bytes
	I1026 15:13:17.928732 1107827 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:13:17.928762 1107827 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:13:17.928791 1107827 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:13:17.928823 1107827 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem (1675 bytes)
	I1026 15:13:17.928877 1107827 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:13:17.929691 1107827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:13:17.955801 1107827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:13:17.980513 1107827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:13:18.004299 1107827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:13:18.026454 1107827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1026 15:13:18.046221 1107827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 15:13:18.065470 1107827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:13:18.084976 1107827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/auto-498531/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:13:18.105908 1107827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:13:18.128405 1107827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem --> /usr/share/ca-certificates/845095.pem (1338 bytes)
	I1026 15:13:18.147578 1107827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /usr/share/ca-certificates/8450952.pem (1708 bytes)
	I1026 15:13:18.166109 1107827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:13:18.179926 1107827 ssh_runner.go:195] Run: openssl version
	I1026 15:13:18.187148 1107827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:13:18.196471 1107827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:18.201578 1107827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:14 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:18.201640 1107827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:18.238896 1107827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:13:18.248444 1107827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845095.pem && ln -fs /usr/share/ca-certificates/845095.pem /etc/ssl/certs/845095.pem"
	I1026 15:13:18.258056 1107827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845095.pem
	I1026 15:13:18.262290 1107827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:26 /usr/share/ca-certificates/845095.pem
	I1026 15:13:18.262367 1107827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845095.pem
	I1026 15:13:18.297526 1107827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/845095.pem /etc/ssl/certs/51391683.0"
	I1026 15:13:18.307912 1107827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8450952.pem && ln -fs /usr/share/ca-certificates/8450952.pem /etc/ssl/certs/8450952.pem"
	I1026 15:13:18.317252 1107827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8450952.pem
	I1026 15:13:18.321314 1107827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:26 /usr/share/ca-certificates/8450952.pem
	I1026 15:13:18.321366 1107827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8450952.pem
	I1026 15:13:18.358332 1107827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8450952.pem /etc/ssl/certs/3ec20f2e.0"
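	The openssl/ln sequence above follows the standard OpenSSL CA layout: each trusted PEM is symlinked under /etc/ssl/certs as <subject-hash>.0 (for example b5213941.0 for minikubeCA.pem) so TLS libraries can locate it by hash. A hypothetical local equivalent in Go; the real flow drives these commands on the node over SSH:

	    package main

	    import (
	    	"fmt"
	    	"log"
	    	"os"
	    	"os/exec"
	    	"strings"
	    )

	    // installCA mirrors the commands above: compute the OpenSSL subject
	    // hash of a cert and (re)create the /etc/ssl/certs/<hash>.0 symlink.
	    func installCA(pemPath string) error {
	    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	    	if err != nil {
	    		return err
	    	}
	    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	    	_ = os.Remove(link) // like `ln -fs`, replace any stale link
	    	return os.Symlink(pemPath, link)
	    }

	    func main() {
	    	// Needs root, since it writes under /etc/ssl/certs.
	    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
	    		log.Fatal(err)
	    	}
	    }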
	I1026 15:13:18.367637 1107827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:13:18.371726 1107827 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 15:13:18.371800 1107827 kubeadm.go:400] StartCluster: {Name:auto-498531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:13:18.371901 1107827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:13:18.371991 1107827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:13:18.400789 1107827 cri.go:89] found id: ""
	I1026 15:13:18.400862 1107827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:13:18.409868 1107827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:13:18.418262 1107827 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 15:13:18.418327 1107827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:13:18.426857 1107827 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:13:18.426880 1107827 kubeadm.go:157] found existing configuration files:
	
	I1026 15:13:18.426933 1107827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:13:18.436190 1107827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:13:18.436258 1107827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:13:18.447137 1107827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:13:18.459979 1107827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:13:18.460048 1107827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:13:18.472231 1107827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:13:18.484344 1107827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:13:18.484414 1107827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:13:18.494403 1107827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:13:18.505178 1107827 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:13:18.505247 1107827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 15:13:18.516456 1107827 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 15:13:18.584959 1107827 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1026 15:13:18.649592 1107827 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 15:13:20.085905 1100384 addons.go:514] duration metric: took 548.495862ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 15:13:20.365460 1100384 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-790012" context rescaled to 1 replicas
	W1026 15:13:21.865611 1100384 node_ready.go:57] node "default-k8s-diff-port-790012" has "Ready":"False" status (will retry)
	I1026 15:13:19.443968 1103368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:19.943983 1103368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:20.444427 1103368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:20.943783 1103368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:21.444294 1103368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:21.944410 1103368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:22.016657 1103368 kubeadm.go:1113] duration metric: took 4.166119407s to wait for elevateKubeSystemPrivileges
	I1026 15:13:22.016701 1103368 kubeadm.go:402] duration metric: took 17.941007852s to StartCluster
	I1026 15:13:22.016726 1103368 settings.go:142] acquiring lock: {Name:mkab79daecf1fab35293493e1e2484069a81f3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:22.016830 1103368 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:13:22.018458 1103368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:22.018699 1103368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:13:22.018713 1103368 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:13:22.018791 1103368 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:13:22.018899 1103368 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-450976"
	I1026 15:13:22.018912 1103368 config.go:182] Loaded profile config "newest-cni-450976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:22.018922 1103368 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-450976"
	I1026 15:13:22.018961 1103368 host.go:66] Checking if "newest-cni-450976" exists ...
	I1026 15:13:22.018959 1103368 addons.go:69] Setting default-storageclass=true in profile "newest-cni-450976"
	I1026 15:13:22.019008 1103368 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-450976"
	I1026 15:13:22.019467 1103368 cli_runner.go:164] Run: docker container inspect newest-cni-450976 --format={{.State.Status}}
	I1026 15:13:22.019551 1103368 cli_runner.go:164] Run: docker container inspect newest-cni-450976 --format={{.State.Status}}
	I1026 15:13:22.020282 1103368 out.go:179] * Verifying Kubernetes components...
	I1026 15:13:22.021732 1103368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:22.043465 1103368 addons.go:238] Setting addon default-storageclass=true in "newest-cni-450976"
	I1026 15:13:22.043508 1103368 host.go:66] Checking if "newest-cni-450976" exists ...
	I1026 15:13:22.043985 1103368 cli_runner.go:164] Run: docker container inspect newest-cni-450976 --format={{.State.Status}}
	I1026 15:13:22.047700 1103368 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:13:22.049541 1103368 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:22.049563 1103368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:13:22.049629 1103368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:22.074479 1103368 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:22.074515 1103368 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:13:22.074576 1103368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:22.083598 1103368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33852 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:22.100979 1103368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33852 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:22.122885 1103368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 15:13:22.175623 1103368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:22.206800 1103368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:22.219416 1103368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:22.306280 1103368 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1026 15:13:22.307967 1103368 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:13:22.308025 1103368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:13:22.537888 1103368 api_server.go:72] duration metric: took 519.13942ms to wait for apiserver process to appear ...
	I1026 15:13:22.537915 1103368 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:13:22.537939 1103368 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1026 15:13:22.543356 1103368 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1026 15:13:22.544242 1103368 api_server.go:141] control plane version: v1.34.1
	I1026 15:13:22.544272 1103368 api_server.go:131] duration metric: took 6.348937ms to wait for apiserver health ...
	I1026 15:13:22.544291 1103368 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:13:22.544670 1103368 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1026 15:13:22.545865 1103368 addons.go:514] duration metric: took 527.077743ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 15:13:22.547365 1103368 system_pods.go:59] 8 kube-system pods found
	I1026 15:13:22.547403 1103368 system_pods.go:61] "coredns-66bc5c9577-7jwrr" [c1acc555-e2da-4acf-ac6d-6818ea2173d5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 15:13:22.547422 1103368 system_pods.go:61] "etcd-newest-cni-450976" [5ee64166-247f-49ca-9212-b4c60c0152c1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:13:22.547431 1103368 system_pods.go:61] "kindnet-9tqxv" [d6ade61f-e6fb-4746-9b65-ce10129cd53e] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1026 15:13:22.547443 1103368 system_pods.go:61] "kube-apiserver-newest-cni-450976" [a2aa9446-3bbe-45c4-902b-07e7773290bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:13:22.547460 1103368 system_pods.go:61] "kube-controller-manager-newest-cni-450976" [0ae3b699-6a5a-41d6-b223-9f6858f990cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:13:22.547475 1103368 system_pods.go:61] "kube-proxy-jfm7b" [6e6c6e48-eb1f-4a31-9cf4-390096851e53] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:13:22.547486 1103368 system_pods.go:61] "kube-scheduler-newest-cni-450976" [8a2965f8-8545-46fd-bcf3-cc767c87b873] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:13:22.547495 1103368 system_pods.go:61] "storage-provisioner" [7182c30a-3cfc-49ba-b2d8-ee172f0272dd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 15:13:22.547503 1103368 system_pods.go:74] duration metric: took 3.204588ms to wait for pod list to return data ...
	I1026 15:13:22.547525 1103368 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:13:22.549744 1103368 default_sa.go:45] found service account: "default"
	I1026 15:13:22.549770 1103368 default_sa.go:55] duration metric: took 2.230754ms for default service account to be created ...
	I1026 15:13:22.549784 1103368 kubeadm.go:586] duration metric: took 531.043388ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 15:13:22.549805 1103368 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:13:22.551879 1103368 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 15:13:22.551899 1103368 node_conditions.go:123] node cpu capacity is 8
	I1026 15:13:22.551912 1103368 node_conditions.go:105] duration metric: took 2.102566ms to run NodePressure ...
	I1026 15:13:22.551924 1103368 start.go:241] waiting for startup goroutines ...
	I1026 15:13:22.811958 1103368 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-450976" context rescaled to 1 replicas
	I1026 15:13:22.812009 1103368 start.go:246] waiting for cluster config update ...
	I1026 15:13:22.812025 1103368 start.go:255] writing updated cluster config ...
	I1026 15:13:22.812460 1103368 ssh_runner.go:195] Run: rm -f paused
	I1026 15:13:22.865681 1103368 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:13:22.867619 1103368 out.go:179] * Done! kubectl is now configured to use "newest-cni-450976" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.638102736Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.641669873Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=00450dac-f8fe-4561-903b-d462be0a1307 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.642398994Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=9ae3a1a7-0334-441a-971b-df98a0fd2ad2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.64325899Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.643714945Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.644134498Z" level=info msg="Ran pod sandbox 1649e878dbe2c3d73b6fe27cb856c4c9308dccc039849308d0d751adfaac854e with infra container: kube-system/kindnet-9tqxv/POD" id=00450dac-f8fe-4561-903b-d462be0a1307 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.644446926Z" level=info msg="Ran pod sandbox 44d5b2200d24cda4425bb40f3b5fc50ca35bfc6274ca5ae878e00c1f8d712f6c with infra container: kube-system/kube-proxy-jfm7b/POD" id=9ae3a1a7-0334-441a-971b-df98a0fd2ad2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.645522108Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=7fa61eea-875c-4b4b-8fdf-5f2f1091190e name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.645534476Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=1864cd38-8174-4375-b6fa-f0a54cd22830 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.646558572Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=5afb5621-f68b-4e36-8c6b-25d8ccae4fb9 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.646568921Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=cae07321-e1b8-4e86-af42-9ef526cf0fc8 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.650484354Z" level=info msg="Creating container: kube-system/kindnet-9tqxv/kindnet-cni" id=24a1b132-8dc6-4bf5-aea1-44513b590bc2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.650574131Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.651621993Z" level=info msg="Creating container: kube-system/kube-proxy-jfm7b/kube-proxy" id=c92c5e6a-79f1-49f6-a451-9267cc9f9f63 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.651771002Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.655917341Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.656470857Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.657930296Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.658506107Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.682303512Z" level=info msg="Created container e9245b4ce59b999b0d263309a7e9fe357ebb9eb00ddb94277d9143c044ee5ffd: kube-system/kindnet-9tqxv/kindnet-cni" id=24a1b132-8dc6-4bf5-aea1-44513b590bc2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.683046606Z" level=info msg="Starting container: e9245b4ce59b999b0d263309a7e9fe357ebb9eb00ddb94277d9143c044ee5ffd" id=f8e853e8-0263-4871-a153-12cc9a747f92 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.684288809Z" level=info msg="Created container 4b7fc6a49736123526f9ed8d2a89a7585f950381b85952c0c6746675b4fedfd5: kube-system/kube-proxy-jfm7b/kube-proxy" id=c92c5e6a-79f1-49f6-a451-9267cc9f9f63 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.684817485Z" level=info msg="Starting container: 4b7fc6a49736123526f9ed8d2a89a7585f950381b85952c0c6746675b4fedfd5" id=2f59b8d0-b8f9-4a59-9157-fbca8d598863 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.684981865Z" level=info msg="Started container" PID=1576 containerID=e9245b4ce59b999b0d263309a7e9fe357ebb9eb00ddb94277d9143c044ee5ffd description=kube-system/kindnet-9tqxv/kindnet-cni id=f8e853e8-0263-4871-a153-12cc9a747f92 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1649e878dbe2c3d73b6fe27cb856c4c9308dccc039849308d0d751adfaac854e
	Oct 26 15:13:22 newest-cni-450976 crio[772]: time="2025-10-26T15:13:22.687673581Z" level=info msg="Started container" PID=1577 containerID=4b7fc6a49736123526f9ed8d2a89a7585f950381b85952c0c6746675b4fedfd5 description=kube-system/kube-proxy-jfm7b/kube-proxy id=2f59b8d0-b8f9-4a59-9157-fbca8d598863 name=/runtime.v1.RuntimeService/StartContainer sandboxID=44d5b2200d24cda4425bb40f3b5fc50ca35bfc6274ca5ae878e00c1f8d712f6c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	4b7fc6a497361       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   44d5b2200d24c       kube-proxy-jfm7b                            kube-system
	e9245b4ce59b9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   1649e878dbe2c       kindnet-9tqxv                               kube-system
	aa24d799c2890       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago      Running             etcd                      0                   534b1c7304f96       etcd-newest-cni-450976                      kube-system
	28bcbb62a95ab       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago      Running             kube-controller-manager   0                   697e66b633b4c       kube-controller-manager-newest-cni-450976   kube-system
	62c3747decb36       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago      Running             kube-apiserver            0                   fe2aedf80e1a0       kube-apiserver-newest-cni-450976            kube-system
	4358491bfffd4       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago      Running             kube-scheduler            0                   9e544d462c45f       kube-scheduler-newest-cni-450976            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-450976
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-450976
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=newest-cni-450976
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_13_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:13:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-450976
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:13:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:13:16 +0000   Sun, 26 Oct 2025 15:13:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:13:16 +0000   Sun, 26 Oct 2025 15:13:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:13:16 +0000   Sun, 26 Oct 2025 15:13:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 26 Oct 2025 15:13:16 +0000   Sun, 26 Oct 2025 15:13:12 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-450976
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                1575f574-b7cf-4d6a-9ab9-f0fb8538a042
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-450976                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-9tqxv                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-450976             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-450976    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-jfm7b                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-450976             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 1s    kube-proxy       
	  Normal  Starting                 8s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s    kubelet          Node newest-cni-450976 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s    kubelet          Node newest-cni-450976 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s    kubelet          Node newest-cni-450976 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s    node-controller  Node newest-cni-450976 event: Registered Node newest-cni-450976 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	
	
	==> etcd [aa24d799c289078e1d780369c9c25e5ab1d063b3d9894ad960a9bb1d11a58d6f] <==
	{"level":"warn","ts":"2025-10-26T15:13:13.438083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.449912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.459384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.471564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.486635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.496791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.505316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.513496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.520619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.529017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.537317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.545718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.554990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.563894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.571722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.580343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.590793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.606096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.613733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.623124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.632072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.648385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.656320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.663657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:13.722922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35200","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:13:24 up  2:55,  0 user,  load average: 3.44, 2.67, 1.79
	Linux newest-cni-450976 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e9245b4ce59b999b0d263309a7e9fe357ebb9eb00ddb94277d9143c044ee5ffd] <==
	I1026 15:13:22.850301       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:13:22.850605       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1026 15:13:22.850797       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:13:22.850816       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:13:22.850843       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:13:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:13:23.145391       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:13:23.145429       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:13:23.145443       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:13:23.145591       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 15:13:23.446505       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:13:23.446544       1 metrics.go:72] Registering metrics
	I1026 15:13:23.446676       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [62c3747decb36997ec0d24f623aa653ac7a51053212443b09106100b9132fddd] <==
	I1026 15:13:14.326017       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:13:14.327048       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 15:13:14.336478       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1026 15:13:14.339334       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1026 15:13:14.351324       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:13:14.351345       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:13:14.351363       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:13:14.543666       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:13:15.229941       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 15:13:15.233703       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 15:13:15.233722       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:13:15.796093       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:13:15.841138       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:13:15.945624       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 15:13:15.953659       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1026 15:13:15.954996       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:13:15.960122       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:13:16.250761       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:13:16.989590       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:13:16.999423       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 15:13:17.007903       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 15:13:21.957093       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:13:21.962397       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:13:22.304212       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1026 15:13:22.352566       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [28bcbb62a95ab110403a4ebcc0f0c1f41d3740b0cf8d3a9e470672d406b92b9f] <==
	I1026 15:13:21.250665       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 15:13:21.250673       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 15:13:21.250728       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 15:13:21.251410       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 15:13:21.251455       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 15:13:21.251522       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 15:13:21.251545       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 15:13:21.251602       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 15:13:21.251699       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 15:13:21.251809       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 15:13:21.252064       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 15:13:21.252157       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 15:13:21.252340       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 15:13:21.252498       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 15:13:21.254017       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 15:13:21.254877       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 15:13:21.256113       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 15:13:21.256171       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:13:21.256184       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 15:13:21.256228       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 15:13:21.256241       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 15:13:21.256249       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 15:13:21.260491       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 15:13:21.262513       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-450976" podCIDRs=["10.42.0.0/24"]
	I1026 15:13:21.272691       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [4b7fc6a49736123526f9ed8d2a89a7585f950381b85952c0c6746675b4fedfd5] <==
	I1026 15:13:22.739557       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:13:22.813947       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:13:22.914244       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:13:22.914282       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1026 15:13:22.914405       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:13:22.938307       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:13:22.938360       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:13:22.946477       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:13:22.947103       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:13:22.948457       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:13:22.950460       1 config.go:309] "Starting node config controller"
	I1026 15:13:22.950520       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:13:22.950547       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:13:22.952271       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:13:22.952414       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:13:22.952581       1 config.go:200] "Starting service config controller"
	I1026 15:13:22.952743       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:13:22.953117       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:13:22.953312       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:13:23.052711       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:13:23.053032       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:13:23.054232       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4358491bfffd4744beb429d4489efbadaf308a715b4121f42fcce7baa725ca54] <==
	E1026 15:13:14.288913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 15:13:14.288951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 15:13:14.288967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 15:13:14.289039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 15:13:14.289059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 15:13:14.289127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 15:13:14.289126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 15:13:14.289257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 15:13:14.289430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 15:13:14.289658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 15:13:15.094139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 15:13:15.142530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 15:13:15.178209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 15:13:15.181368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1026 15:13:15.186690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 15:13:15.187552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 15:13:15.223188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 15:13:15.229458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 15:13:15.237587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 15:13:15.262780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 15:13:15.321722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 15:13:15.446402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 15:13:15.477889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 15:13:15.569529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1026 15:13:17.783570       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:13:17 newest-cni-450976 kubelet[1296]: I1026 15:13:17.120186    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74b10d2c54d232d5fb77e8022112bf55-usr-local-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-450976\" (UID: \"74b10d2c54d232d5fb77e8022112bf55\") " pod="kube-system/kube-controller-manager-newest-cni-450976"
	Oct 26 15:13:17 newest-cni-450976 kubelet[1296]: I1026 15:13:17.812830    1296 apiserver.go:52] "Watching apiserver"
	Oct 26 15:13:17 newest-cni-450976 kubelet[1296]: I1026 15:13:17.818260    1296 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 26 15:13:17 newest-cni-450976 kubelet[1296]: I1026 15:13:17.860364    1296 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-450976"
	Oct 26 15:13:17 newest-cni-450976 kubelet[1296]: I1026 15:13:17.860652    1296 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-450976"
	Oct 26 15:13:17 newest-cni-450976 kubelet[1296]: I1026 15:13:17.860842    1296 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-450976"
	Oct 26 15:13:17 newest-cni-450976 kubelet[1296]: E1026 15:13:17.873329    1296 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-450976\" already exists" pod="kube-system/kube-apiserver-newest-cni-450976"
	Oct 26 15:13:17 newest-cni-450976 kubelet[1296]: E1026 15:13:17.873609    1296 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-450976\" already exists" pod="kube-system/kube-scheduler-newest-cni-450976"
	Oct 26 15:13:17 newest-cni-450976 kubelet[1296]: E1026 15:13:17.873631    1296 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-450976\" already exists" pod="kube-system/etcd-newest-cni-450976"
	Oct 26 15:13:17 newest-cni-450976 kubelet[1296]: I1026 15:13:17.922711    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-450976" podStartSLOduration=1.9226880880000001 podStartE2EDuration="1.922688088s" podCreationTimestamp="2025-10-26 15:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:13:17.922152379 +0000 UTC m=+1.177225491" watchObservedRunningTime="2025-10-26 15:13:17.922688088 +0000 UTC m=+1.177761197"
	Oct 26 15:13:17 newest-cni-450976 kubelet[1296]: I1026 15:13:17.922879    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-450976" podStartSLOduration=1.922864291 podStartE2EDuration="1.922864291s" podCreationTimestamp="2025-10-26 15:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:13:17.907635903 +0000 UTC m=+1.162709007" watchObservedRunningTime="2025-10-26 15:13:17.922864291 +0000 UTC m=+1.177937400"
	Oct 26 15:13:17 newest-cni-450976 kubelet[1296]: I1026 15:13:17.945355    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-450976" podStartSLOduration=1.945329338 podStartE2EDuration="1.945329338s" podCreationTimestamp="2025-10-26 15:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:13:17.932740877 +0000 UTC m=+1.187813986" watchObservedRunningTime="2025-10-26 15:13:17.945329338 +0000 UTC m=+1.200402447"
	Oct 26 15:13:17 newest-cni-450976 kubelet[1296]: I1026 15:13:17.945455    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-450976" podStartSLOduration=1.9454492270000001 podStartE2EDuration="1.945449227s" podCreationTimestamp="2025-10-26 15:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:13:17.945069582 +0000 UTC m=+1.200142695" watchObservedRunningTime="2025-10-26 15:13:17.945449227 +0000 UTC m=+1.200522338"
	Oct 26 15:13:21 newest-cni-450976 kubelet[1296]: I1026 15:13:21.272506    1296 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 26 15:13:21 newest-cni-450976 kubelet[1296]: I1026 15:13:21.273367    1296 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 26 15:13:22 newest-cni-450976 kubelet[1296]: I1026 15:13:22.356975    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6ade61f-e6fb-4746-9b65-ce10129cd53e-xtables-lock\") pod \"kindnet-9tqxv\" (UID: \"d6ade61f-e6fb-4746-9b65-ce10129cd53e\") " pod="kube-system/kindnet-9tqxv"
	Oct 26 15:13:22 newest-cni-450976 kubelet[1296]: I1026 15:13:22.357042    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn8p5\" (UniqueName: \"kubernetes.io/projected/d6ade61f-e6fb-4746-9b65-ce10129cd53e-kube-api-access-jn8p5\") pod \"kindnet-9tqxv\" (UID: \"d6ade61f-e6fb-4746-9b65-ce10129cd53e\") " pod="kube-system/kindnet-9tqxv"
	Oct 26 15:13:22 newest-cni-450976 kubelet[1296]: I1026 15:13:22.357075    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6e6c6e48-eb1f-4a31-9cf4-390096851e53-kube-proxy\") pod \"kube-proxy-jfm7b\" (UID: \"6e6c6e48-eb1f-4a31-9cf4-390096851e53\") " pod="kube-system/kube-proxy-jfm7b"
	Oct 26 15:13:22 newest-cni-450976 kubelet[1296]: I1026 15:13:22.357104    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d6ade61f-e6fb-4746-9b65-ce10129cd53e-cni-cfg\") pod \"kindnet-9tqxv\" (UID: \"d6ade61f-e6fb-4746-9b65-ce10129cd53e\") " pod="kube-system/kindnet-9tqxv"
	Oct 26 15:13:22 newest-cni-450976 kubelet[1296]: I1026 15:13:22.357123    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e6c6e48-eb1f-4a31-9cf4-390096851e53-xtables-lock\") pod \"kube-proxy-jfm7b\" (UID: \"6e6c6e48-eb1f-4a31-9cf4-390096851e53\") " pod="kube-system/kube-proxy-jfm7b"
	Oct 26 15:13:22 newest-cni-450976 kubelet[1296]: I1026 15:13:22.357648    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e6c6e48-eb1f-4a31-9cf4-390096851e53-lib-modules\") pod \"kube-proxy-jfm7b\" (UID: \"6e6c6e48-eb1f-4a31-9cf4-390096851e53\") " pod="kube-system/kube-proxy-jfm7b"
	Oct 26 15:13:22 newest-cni-450976 kubelet[1296]: I1026 15:13:22.358077    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6ade61f-e6fb-4746-9b65-ce10129cd53e-lib-modules\") pod \"kindnet-9tqxv\" (UID: \"d6ade61f-e6fb-4746-9b65-ce10129cd53e\") " pod="kube-system/kindnet-9tqxv"
	Oct 26 15:13:22 newest-cni-450976 kubelet[1296]: I1026 15:13:22.358124    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncc9w\" (UniqueName: \"kubernetes.io/projected/6e6c6e48-eb1f-4a31-9cf4-390096851e53-kube-api-access-ncc9w\") pod \"kube-proxy-jfm7b\" (UID: \"6e6c6e48-eb1f-4a31-9cf4-390096851e53\") " pod="kube-system/kube-proxy-jfm7b"
	Oct 26 15:13:22 newest-cni-450976 kubelet[1296]: I1026 15:13:22.902737    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9tqxv" podStartSLOduration=0.902714203 podStartE2EDuration="902.714203ms" podCreationTimestamp="2025-10-26 15:13:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:13:22.902620421 +0000 UTC m=+6.157693544" watchObservedRunningTime="2025-10-26 15:13:22.902714203 +0000 UTC m=+6.157787313"
	Oct 26 15:13:22 newest-cni-450976 kubelet[1296]: I1026 15:13:22.903120    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jfm7b" podStartSLOduration=0.903093661 podStartE2EDuration="903.093661ms" podCreationTimestamp="2025-10-26 15:13:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:13:22.886105959 +0000 UTC m=+6.141179067" watchObservedRunningTime="2025-10-26 15:13:22.903093661 +0000 UTC m=+6.158166770"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-450976 -n newest-cni-450976
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-450976 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-7jwrr storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-450976 describe pod coredns-66bc5c9577-7jwrr storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-450976 describe pod coredns-66bc5c9577-7jwrr storage-provisioner: exit status 1 (73.266203ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-7jwrr" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-450976 describe pod coredns-66bc5c9577-7jwrr storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.18s)
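Note on this failure: the post-mortem above lists coredns-66bc5c9577-7jwrr and storage-provisioner as non-running, and the pod dump earlier in the log shows both Unschedulable because the lone node still carried the node.kubernetes.io/not-ready taint (the CNI config had not yet been written when the check ran; see the KubeletNotReady condition in the node description). A quick way to confirm that state by hand, reusing the context name from this log (a sketch, not part of the test run):

	kubectl --context newest-cni-450976 describe node newest-cni-450976 | grep 'Taints:'
	kubectl --context newest-cni-450976 get pods -A --field-selector=status.phase!=Running

By the time the describe-pod post-mortem ran, both pods had apparently already been replaced (the coredns deployment was rescaled to 1 replica moments earlier), hence the NotFound errors instead of a useful description.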

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-790012 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-790012 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (378.759646ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:13:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-790012 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
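
The MK_ADDON_ENABLE_PAUSED error above is raised by a pre-flight "check paused" probe: before enabling an addon, minikube lists the node's containers through runc and aborts when that listing itself fails, which is what `open /run/runc: no such file or directory` triggers on this freshly restarted crio node. A minimal sketch of such a probe, assuming it runs inside the node (the harness actually drives it over SSH, and this is not minikube's source):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcState carries the two fields we need from `runc list -f json`.
	type runcState struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// listPaused runs the same command that failed above and returns the
	// IDs of paused containers. A non-zero exit (e.g. /run/runc missing)
	// propagates as "list paused: runc: ...", matching the log.
	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			return nil, fmt.Errorf("list paused: runc: %w", err)
		}
		var states []runcState
		// runc prints "null" when no containers exist; Unmarshal then
		// simply leaves the slice nil.
		if err := json.Unmarshal(out, &states); err != nil {
			return nil, err
		}
		var paused []string
		for _, s := range states {
			if s.Status == "paused" {
				paused = append(paused, s.ID)
			}
		}
		return paused, nil
	}

	func main() {
		paused, err := listPaused()
		fmt.Println(paused, err)
	}
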
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-790012 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-790012 describe deploy/metrics-server -n kube-system: exit status 1 (117.45599ms)
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-790012 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
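
The assertion at start_stop_delete_test.go:219 boils down to checking that the --images/--registries overrides reached the deployment spec. A sketch of that check (an assumed helper, not the test's code); in this run the deployment was never created, so the kubectl call itself fails with NotFound, matching the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// addonImageOverridden reads the metrics-server image from the
	// deployment and reports whether it carries the fake.domain registry
	// prefix requested via --registries.
	func addonImageOverridden(kubeContext string) (bool, error) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"-n", "kube-system", "get", "deploy/metrics-server",
			"-o=jsonpath={.spec.template.spec.containers[0].image}").Output()
		if err != nil {
			return false, err // NotFound here mirrors the failure above
		}
		return strings.Contains(string(out),
			"fake.domain/registry.k8s.io/echoserver:1.4"), nil
	}

	func main() {
		ok, err := addonImageOverridden("default-k8s-diff-port-790012")
		fmt.Println(ok, err)
	}
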
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-790012
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-790012:
-- stdout --
	[
	    {
	        "Id": "f2c26d088cf784b9fa3246255055619f610c4cc9d4a3450f83c3d6e8e7c2648a",
	        "Created": "2025-10-26T15:12:52.819696195Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1102480,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:12:52.861526743Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/f2c26d088cf784b9fa3246255055619f610c4cc9d4a3450f83c3d6e8e7c2648a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f2c26d088cf784b9fa3246255055619f610c4cc9d4a3450f83c3d6e8e7c2648a/hostname",
	        "HostsPath": "/var/lib/docker/containers/f2c26d088cf784b9fa3246255055619f610c4cc9d4a3450f83c3d6e8e7c2648a/hosts",
	        "LogPath": "/var/lib/docker/containers/f2c26d088cf784b9fa3246255055619f610c4cc9d4a3450f83c3d6e8e7c2648a/f2c26d088cf784b9fa3246255055619f610c4cc9d4a3450f83c3d6e8e7c2648a-json.log",
	        "Name": "/default-k8s-diff-port-790012",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-790012:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-790012",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f2c26d088cf784b9fa3246255055619f610c4cc9d4a3450f83c3d6e8e7c2648a",
	                "LowerDir": "/var/lib/docker/overlay2/cb1f825d0d0a1ba72d95cb70e9ee9f8fe5570837cf0ab7bbcdefcc67f9bd4518-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb1f825d0d0a1ba72d95cb70e9ee9f8fe5570837cf0ab7bbcdefcc67f9bd4518/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb1f825d0d0a1ba72d95cb70e9ee9f8fe5570837cf0ab7bbcdefcc67f9bd4518/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb1f825d0d0a1ba72d95cb70e9ee9f8fe5570837cf0ab7bbcdefcc67f9bd4518/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-790012",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-790012/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-790012",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-790012",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-790012",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "52906b8b4392ad3e01f60735102bffe9b3b74423dc39fc1547deae934ba65548",
	            "SandboxKey": "/var/run/docker/netns/52906b8b4392",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33848"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33851"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33849"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33850"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-790012": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:bd:25:ce:56:43",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eb8db690bfd734c5a8c0b627f3759fdde408bba40a95fd914967f52dd3a0e0bf",
	                    "EndpointID": "add90278ed86aad157ff442ffc3f612fd7d8edff6d20523e374f8633cdc3da5b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-790012",
	                        "f2c26d088cf7"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
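
One practical detail in the inspect output above: every node port is published only on 127.0.0.1 with an ephemeral host port (22/tcp maps to 33847 here), and later log lines resolve that mapping with a docker inspect Go template before opening SSH sessions. A small sketch of the same lookup:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort asks docker for the host port bound to the container's
	// 22/tcp, using the same template that appears in the logs below.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("default-k8s-diff-port-790012")
		fmt.Println(port, err) // e.g. "33847" per the inspect above
	}
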
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-790012 -n default-k8s-diff-port-790012
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-790012 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-790012 logs -n 25: (1.318999726s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p kubernetes-upgrade-176599                                                                                                                                                                                                                  │ kubernetes-upgrade-176599    │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p embed-certs-535130 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-535130           │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:13 UTC │
	│ image   │ old-k8s-version-330914 image list --format=json                                                                                                                                                                                               │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ pause   │ -p old-k8s-version-330914 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	│ delete  │ -p old-k8s-version-330914                                                                                                                                                                                                                     │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ image   │ no-preload-475081 image list --format=json                                                                                                                                                                                                    │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ pause   │ -p no-preload-475081 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	│ delete  │ -p old-k8s-version-330914                                                                                                                                                                                                                     │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ delete  │ -p disable-driver-mounts-619402                                                                                                                                                                                                               │ disable-driver-mounts-619402 │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p default-k8s-diff-port-790012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-790012 │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:13 UTC │
	│ delete  │ -p no-preload-475081                                                                                                                                                                                                                          │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ delete  │ -p no-preload-475081                                                                                                                                                                                                                          │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p newest-cni-450976 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-450976            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:13 UTC │
	│ start   │ -p cert-expiration-619245 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-619245       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:13 UTC │
	│ delete  │ -p cert-expiration-619245                                                                                                                                                                                                                     │ cert-expiration-619245       │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ start   │ -p auto-498531 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-498531                  │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-535130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-535130           │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ stop    │ -p embed-certs-535130 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-535130           │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ addons  │ enable metrics-server -p newest-cni-450976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-450976            │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ stop    │ -p newest-cni-450976 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-450976            │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ addons  │ enable dashboard -p embed-certs-535130 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-535130           │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ start   │ -p embed-certs-535130 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-535130           │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-450976 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-450976            │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ start   │ -p newest-cni-450976 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-450976            │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-790012 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-790012 │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:13:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:13:33.334804 1114752 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:13:33.335030 1114752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:13:33.335037 1114752 out.go:374] Setting ErrFile to fd 2...
	I1026 15:13:33.335041 1114752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:13:33.335275 1114752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:13:33.335717 1114752 out.go:368] Setting JSON to false
	I1026 15:13:33.336864 1114752 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10561,"bootTime":1761481052,"procs":382,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:13:33.336965 1114752 start.go:141] virtualization: kvm guest
	I1026 15:13:33.338732 1114752 out.go:179] * [newest-cni-450976] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:13:33.340086 1114752 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:13:33.340115 1114752 notify.go:220] Checking for updates...
	I1026 15:13:33.342297 1114752 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:13:33.343663 1114752 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:13:33.344846 1114752 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 15:13:33.346031 1114752 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:13:33.347279 1114752 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:13:33.349221 1114752 config.go:182] Loaded profile config "newest-cni-450976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:33.349915 1114752 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:13:33.376031 1114752 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 15:13:33.376129 1114752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:13:33.438088 1114752 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-26 15:13:33.426631481 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:13:33.438228 1114752 docker.go:318] overlay module found
	I1026 15:13:33.440047 1114752 out.go:179] * Using the docker driver based on existing profile
	I1026 15:13:33.441532 1114752 start.go:305] selected driver: docker
	I1026 15:13:33.441548 1114752 start.go:925] validating driver "docker" against &{Name:newest-cni-450976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-450976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:13:33.441657 1114752 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:13:33.442266 1114752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:13:33.505289 1114752 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-26 15:13:33.494889004 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:13:33.505603 1114752 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 15:13:33.505638 1114752 cni.go:84] Creating CNI manager for ""
	I1026 15:13:33.505687 1114752 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:13:33.505724 1114752 start.go:349] cluster config:
	{Name:newest-cni-450976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-450976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:13:33.508668 1114752 out.go:179] * Starting "newest-cni-450976" primary control-plane node in "newest-cni-450976" cluster
	I1026 15:13:33.510071 1114752 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:13:33.511479 1114752 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:13:33.512708 1114752 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:13:33.512753 1114752 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:13:33.512777 1114752 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 15:13:33.512801 1114752 cache.go:58] Caching tarball of preloaded images
	I1026 15:13:33.512888 1114752 preload.go:233] Found /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 15:13:33.512898 1114752 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:13:33.512995 1114752 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/config.json ...
	I1026 15:13:33.534783 1114752 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:13:33.534810 1114752 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:13:33.534834 1114752 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:13:33.534873 1114752 start.go:360] acquireMachinesLock for newest-cni-450976: {Name:mkd25f5c88d69734bd3a1425b2ee7adeba19f996 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:13:33.534945 1114752 start.go:364] duration metric: took 46.831µs to acquireMachinesLock for "newest-cni-450976"
	I1026 15:13:33.534970 1114752 start.go:96] Skipping create...Using existing machine configuration
	I1026 15:13:33.534980 1114752 fix.go:54] fixHost starting: 
	I1026 15:13:33.535289 1114752 cli_runner.go:164] Run: docker container inspect newest-cni-450976 --format={{.State.Status}}
	I1026 15:13:33.554995 1114752 fix.go:112] recreateIfNeeded on newest-cni-450976: state=Stopped err=<nil>
	W1026 15:13:33.555041 1114752 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 15:13:31.934443 1100384 system_pods.go:86] 8 kube-system pods found
	I1026 15:13:31.934495 1100384 system_pods.go:89] "coredns-66bc5c9577-shw6l" [34b47d5d-504d-4f7a-905e-acd0787bad18] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:13:31.934506 1100384 system_pods.go:89] "etcd-default-k8s-diff-port-790012" [18a43e2a-b91b-4b24-a5f6-4ce939ee4840] Running
	I1026 15:13:31.934515 1100384 system_pods.go:89] "kindnet-7ch5r" [54b7119d-e62c-46d9-a2a6-2f5a0f1e4e17] Running
	I1026 15:13:31.934521 1100384 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-790012" [cdf846a0-22e6-4261-abdc-bd5f72bdbc80] Running
	I1026 15:13:31.934528 1100384 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-790012" [4e9cad9b-4439-4d70-98c2-10b7fcd16c25] Running
	I1026 15:13:31.934533 1100384 system_pods.go:89] "kube-proxy-wk2nn" [928b7499-0464-4469-9f74-0e72935a8464] Running
	I1026 15:13:31.934539 1100384 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-790012" [80d7b5ad-decf-4b5f-a03f-4f63aed757a1] Running
	I1026 15:13:31.934547 1100384 system_pods.go:89] "storage-provisioner" [1f95e80f-9f93-44c4-b761-fd518de0c4d9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:13:31.934574 1100384 retry.go:31] will retry after 343.738043ms: missing components: kube-dns
	I1026 15:13:32.282807 1100384 system_pods.go:86] 8 kube-system pods found
	I1026 15:13:32.282849 1100384 system_pods.go:89] "coredns-66bc5c9577-shw6l" [34b47d5d-504d-4f7a-905e-acd0787bad18] Running
	I1026 15:13:32.282858 1100384 system_pods.go:89] "etcd-default-k8s-diff-port-790012" [18a43e2a-b91b-4b24-a5f6-4ce939ee4840] Running
	I1026 15:13:32.282865 1100384 system_pods.go:89] "kindnet-7ch5r" [54b7119d-e62c-46d9-a2a6-2f5a0f1e4e17] Running
	I1026 15:13:32.282871 1100384 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-790012" [cdf846a0-22e6-4261-abdc-bd5f72bdbc80] Running
	I1026 15:13:32.282875 1100384 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-790012" [4e9cad9b-4439-4d70-98c2-10b7fcd16c25] Running
	I1026 15:13:32.282878 1100384 system_pods.go:89] "kube-proxy-wk2nn" [928b7499-0464-4469-9f74-0e72935a8464] Running
	I1026 15:13:32.282881 1100384 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-790012" [80d7b5ad-decf-4b5f-a03f-4f63aed757a1] Running
	I1026 15:13:32.282886 1100384 system_pods.go:89] "storage-provisioner" [1f95e80f-9f93-44c4-b761-fd518de0c4d9] Running
	I1026 15:13:32.282897 1100384 system_pods.go:126] duration metric: took 891.5938ms to wait for k8s-apps to be running ...
	I1026 15:13:32.282914 1100384 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:13:32.282969 1100384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:13:32.296468 1100384 system_svc.go:56] duration metric: took 13.54263ms WaitForService to wait for kubelet
	I1026 15:13:32.296504 1100384 kubeadm.go:586] duration metric: took 12.759102603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:13:32.296526 1100384 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:13:32.299850 1100384 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 15:13:32.299878 1100384 node_conditions.go:123] node cpu capacity is 8
	I1026 15:13:32.299894 1100384 node_conditions.go:105] duration metric: took 3.363088ms to run NodePressure ...
	I1026 15:13:32.299907 1100384 start.go:241] waiting for startup goroutines ...
	I1026 15:13:32.299914 1100384 start.go:246] waiting for cluster config update ...
	I1026 15:13:32.299924 1100384 start.go:255] writing updated cluster config ...
	I1026 15:13:32.300234 1100384 ssh_runner.go:195] Run: rm -f paused
	I1026 15:13:32.304335 1100384 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:13:32.307473 1100384 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-shw6l" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.312359 1100384 pod_ready.go:94] pod "coredns-66bc5c9577-shw6l" is "Ready"
	I1026 15:13:32.312384 1100384 pod_ready.go:86] duration metric: took 4.8862ms for pod "coredns-66bc5c9577-shw6l" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.314462 1100384 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.318273 1100384 pod_ready.go:94] pod "etcd-default-k8s-diff-port-790012" is "Ready"
	I1026 15:13:32.318297 1100384 pod_ready.go:86] duration metric: took 3.808174ms for pod "etcd-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.320377 1100384 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.324152 1100384 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-790012" is "Ready"
	I1026 15:13:32.324183 1100384 pod_ready.go:86] duration metric: took 3.787572ms for pod "kube-apiserver-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.325956 1100384 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.708420 1100384 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-790012" is "Ready"
	I1026 15:13:32.708454 1100384 pod_ready.go:86] duration metric: took 382.476768ms for pod "kube-controller-manager-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.908230 1100384 pod_ready.go:83] waiting for pod "kube-proxy-wk2nn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:33.308766 1100384 pod_ready.go:94] pod "kube-proxy-wk2nn" is "Ready"
	I1026 15:13:33.308793 1100384 pod_ready.go:86] duration metric: took 400.537302ms for pod "kube-proxy-wk2nn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:33.509496 1100384 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:33.908459 1100384 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-790012" is "Ready"
	I1026 15:13:33.908489 1100384 pod_ready.go:86] duration metric: took 398.969559ms for pod "kube-scheduler-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:33.908501 1100384 pod_ready.go:40] duration metric: took 1.604136935s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:13:33.958143 1100384 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:13:33.963345 1100384 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-790012" cluster and "default" namespace by default
	I1026 15:13:31.698014 1107827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:32.198139 1107827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:32.698359 1107827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:33.197334 1107827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:33.697439 1107827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:34.198261 1107827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:34.697736 1107827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:34.769063 1107827 kubeadm.go:1113] duration metric: took 4.175520669s to wait for elevateKubeSystemPrivileges
	I1026 15:13:34.769102 1107827 kubeadm.go:402] duration metric: took 16.397307608s to StartCluster
	I1026 15:13:34.769127 1107827 settings.go:142] acquiring lock: {Name:mkab79daecf1fab35293493e1e2484069a81f3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:34.769225 1107827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:13:34.770585 1107827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:34.770908 1107827 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:13:34.770943 1107827 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:13:34.770916 1107827 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:13:34.771042 1107827 addons.go:69] Setting default-storageclass=true in profile "auto-498531"
	I1026 15:13:34.771064 1107827 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-498531"
	I1026 15:13:34.771123 1107827 config.go:182] Loaded profile config "auto-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:34.771035 1107827 addons.go:69] Setting storage-provisioner=true in profile "auto-498531"
	I1026 15:13:34.771231 1107827 addons.go:238] Setting addon storage-provisioner=true in "auto-498531"
	I1026 15:13:34.771262 1107827 host.go:66] Checking if "auto-498531" exists ...
	I1026 15:13:34.771577 1107827 cli_runner.go:164] Run: docker container inspect auto-498531 --format={{.State.Status}}
	I1026 15:13:34.771772 1107827 cli_runner.go:164] Run: docker container inspect auto-498531 --format={{.State.Status}}
	I1026 15:13:34.776443 1107827 out.go:179] * Verifying Kubernetes components...
	I1026 15:13:34.777765 1107827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:34.799695 1107827 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:13:34.799746 1107827 addons.go:238] Setting addon default-storageclass=true in "auto-498531"
	I1026 15:13:34.799799 1107827 host.go:66] Checking if "auto-498531" exists ...
	I1026 15:13:34.800417 1107827 cli_runner.go:164] Run: docker container inspect auto-498531 --format={{.State.Status}}
	I1026 15:13:34.801236 1107827 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:34.801257 1107827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:13:34.801312 1107827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-498531
	I1026 15:13:34.829194 1107827 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:34.829225 1107827 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:13:34.829294 1107827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-498531
	I1026 15:13:34.832250 1107827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/auto-498531/id_rsa Username:docker}
	I1026 15:13:34.854598 1107827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/auto-498531/id_rsa Username:docker}
	I1026 15:13:34.869079 1107827 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 15:13:34.927602 1107827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:34.952284 1107827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:34.970067 1107827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:35.045061 1107827 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1026 15:13:35.046332 1107827 node_ready.go:35] waiting up to 15m0s for node "auto-498531" to be "Ready" ...
	I1026 15:13:35.313398 1107827 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1026 15:13:31.389560 1113766 out.go:252] * Restarting existing docker container for "embed-certs-535130" ...
	I1026 15:13:31.389635 1113766 cli_runner.go:164] Run: docker start embed-certs-535130
	I1026 15:13:31.660384 1113766 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:13:31.678708 1113766 kic.go:430] container "embed-certs-535130" state is running.
	I1026 15:13:31.679126 1113766 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-535130
	I1026 15:13:31.697538 1113766 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/config.json ...
	I1026 15:13:31.697945 1113766 machine.go:93] provisionDockerMachine start ...
	I1026 15:13:31.698059 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:31.718923 1113766 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:31.719190 1113766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I1026 15:13:31.719210 1113766 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:13:31.720103 1113766 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48182->127.0.0.1:33862: read: connection reset by peer
	I1026 15:13:34.882840 1113766 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-535130
	
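The "connection reset by peer" at 15:13:31.720103, followed by a clean hostname result a few seconds later, is the usual dial-until-sshd-answers pattern against a just-restarted container. A minimal sketch of that pattern with golang.org/x/crypto/ssh (illustrative only; the port and key path are the values visible in the log, the retry budget is a placeholder, and none of this is minikube's actual implementation):

	package main

	import (
		"fmt"
		"log"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path and port as seen in the sshutil/libmachine lines above.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only in a throwaway test rig
			Timeout:         5 * time.Second,
		}
		var client *ssh.Client
		for attempt := 0; attempt < 30; attempt++ {
			// Resets are expected while the container's sshd is still coming up.
			client, err = ssh.Dial("tcp", "127.0.0.1:33862", cfg)
			if err == nil {
				break
			}
			time.Sleep(time.Second)
		}
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s", out)
	}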
	I1026 15:13:34.882880 1113766 ubuntu.go:182] provisioning hostname "embed-certs-535130"
	I1026 15:13:34.882953 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:34.905938 1113766 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:34.906301 1113766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I1026 15:13:34.906322 1113766 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-535130 && echo "embed-certs-535130" | sudo tee /etc/hostname
	I1026 15:13:35.076003 1113766 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-535130
	
	I1026 15:13:35.076117 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:35.103101 1113766 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:35.103427 1113766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I1026 15:13:35.103450 1113766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-535130' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-535130/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-535130' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:13:35.256045 1113766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:13:35.256087 1113766 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-841519/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-841519/.minikube}
	I1026 15:13:35.256116 1113766 ubuntu.go:190] setting up certificates
	I1026 15:13:35.256132 1113766 provision.go:84] configureAuth start
	I1026 15:13:35.256217 1113766 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-535130
	I1026 15:13:35.279777 1113766 provision.go:143] copyHostCerts
	I1026 15:13:35.279863 1113766 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem, removing ...
	I1026 15:13:35.279881 1113766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem
	I1026 15:13:35.279958 1113766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem (1082 bytes)
	I1026 15:13:35.280106 1113766 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem, removing ...
	I1026 15:13:35.280124 1113766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem
	I1026 15:13:35.280197 1113766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem (1123 bytes)
	I1026 15:13:35.280306 1113766 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem, removing ...
	I1026 15:13:35.280314 1113766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem
	I1026 15:13:35.280352 1113766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem (1675 bytes)
	I1026 15:13:35.280449 1113766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem org=jenkins.embed-certs-535130 san=[127.0.0.1 192.168.76.2 embed-certs-535130 localhost minikube]
	I1026 15:13:35.849277 1113766 provision.go:177] copyRemoteCerts
	I1026 15:13:35.849339 1113766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:13:35.849383 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:35.868289 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:35.970503 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1026 15:13:35.989799 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:13:36.009306 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:13:36.028885 1113766 provision.go:87] duration metric: took 772.732042ms to configureAuth
	I1026 15:13:36.028916 1113766 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:13:36.029146 1113766 config.go:182] Loaded profile config "embed-certs-535130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:36.029314 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:36.049872 1113766 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:36.050196 1113766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I1026 15:13:36.050225 1113766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:13:35.314522 1107827 addons.go:514] duration metric: took 543.574864ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 15:13:35.550021 1107827 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-498531" context rescaled to 1 replicas
	I1026 15:13:36.367622 1113766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:13:36.367650 1113766 machine.go:96] duration metric: took 4.669682302s to provisionDockerMachine
	I1026 15:13:36.367675 1113766 start.go:293] postStartSetup for "embed-certs-535130" (driver="docker")
	I1026 15:13:36.367689 1113766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:13:36.367750 1113766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:13:36.367797 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:36.388448 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:36.492995 1113766 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:13:36.496912 1113766 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:13:36.496990 1113766 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:13:36.497005 1113766 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/addons for local assets ...
	I1026 15:13:36.497441 1113766 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/files for local assets ...
	I1026 15:13:36.497581 1113766 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem -> 8450952.pem in /etc/ssl/certs
	I1026 15:13:36.497738 1113766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:13:36.506836 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:13:36.525298 1113766 start.go:296] duration metric: took 157.60468ms for postStartSetup
	I1026 15:13:36.525405 1113766 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:13:36.525460 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:36.544951 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:36.644413 1113766 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:13:36.649341 1113766 fix.go:56] duration metric: took 5.281758238s for fixHost
	I1026 15:13:36.649370 1113766 start.go:83] releasing machines lock for "embed-certs-535130", held for 5.281812223s
	I1026 15:13:36.649447 1113766 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-535130
	I1026 15:13:36.667811 1113766 ssh_runner.go:195] Run: cat /version.json
	I1026 15:13:36.667869 1113766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:13:36.667877 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:36.667930 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:36.687798 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:36.688085 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:36.842869 1113766 ssh_runner.go:195] Run: systemctl --version
	I1026 15:13:36.849931 1113766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:13:36.885592 1113766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:13:36.890858 1113766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:13:36.890935 1113766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:13:36.899349 1113766 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 15:13:36.899377 1113766 start.go:495] detecting cgroup driver to use...
	I1026 15:13:36.899413 1113766 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 15:13:36.899462 1113766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:13:36.915265 1113766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:13:36.928368 1113766 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:13:36.928419 1113766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:13:36.943985 1113766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:13:36.957590 1113766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:13:37.049991 1113766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:13:37.136299 1113766 docker.go:234] disabling docker service ...
	I1026 15:13:37.136360 1113766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:13:37.151928 1113766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:13:37.165026 1113766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:13:37.251238 1113766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:13:37.336342 1113766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:13:37.348920 1113766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:13:37.365703 1113766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:13:37.365769 1113766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:37.375313 1113766 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 15:13:37.375377 1113766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:37.385150 1113766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:37.394723 1113766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:37.404588 1113766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:13:37.413415 1113766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:37.423296 1113766 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:37.432517 1113766 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:37.441828 1113766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:13:37.449865 1113766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:13:37.457461 1113766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:37.547638 1113766 ssh_runner.go:195] Run: sudo systemctl restart crio
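Taken together, the sed edits between 15:13:37.365769 and 15:13:37.432517 leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (reconstructed from the commands above; the log does not show the surrounding TOML section headers, so they are omitted here):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The systemctl daemon-reload and restart crio immediately above are what make the new pause image, systemd cgroup driver, and unprivileged-port sysctl take effect before kubelet starts.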
	I1026 15:13:37.664727 1113766 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:13:37.664798 1113766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:13:37.668861 1113766 start.go:563] Will wait 60s for crictl version
	I1026 15:13:37.668919 1113766 ssh_runner.go:195] Run: which crictl
	I1026 15:13:37.672511 1113766 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:13:37.701474 1113766 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:13:37.701556 1113766 ssh_runner.go:195] Run: crio --version
	I1026 15:13:37.731543 1113766 ssh_runner.go:195] Run: crio --version
	I1026 15:13:37.765561 1113766 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:13:33.556906 1114752 out.go:252] * Restarting existing docker container for "newest-cni-450976" ...
	I1026 15:13:33.556988 1114752 cli_runner.go:164] Run: docker start newest-cni-450976
	I1026 15:13:33.822470 1114752 cli_runner.go:164] Run: docker container inspect newest-cni-450976 --format={{.State.Status}}
	I1026 15:13:33.842102 1114752 kic.go:430] container "newest-cni-450976" state is running.
	I1026 15:13:33.842808 1114752 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450976
	I1026 15:13:33.863064 1114752 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/config.json ...
	I1026 15:13:33.863323 1114752 machine.go:93] provisionDockerMachine start ...
	I1026 15:13:33.863396 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:33.884364 1114752 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:33.884687 1114752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33867 <nil> <nil>}
	I1026 15:13:33.884704 1114752 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:13:33.885475 1114752 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35906->127.0.0.1:33867: read: connection reset by peer
	I1026 15:13:37.031343 1114752 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-450976
	
	I1026 15:13:37.031380 1114752 ubuntu.go:182] provisioning hostname "newest-cni-450976"
	I1026 15:13:37.031446 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:37.051564 1114752 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:37.051811 1114752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33867 <nil> <nil>}
	I1026 15:13:37.051826 1114752 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-450976 && echo "newest-cni-450976" | sudo tee /etc/hostname
	I1026 15:13:37.213333 1114752 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-450976
	
	I1026 15:13:37.213420 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:37.231946 1114752 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:37.232310 1114752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33867 <nil> <nil>}
	I1026 15:13:37.232342 1114752 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-450976' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-450976/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-450976' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:13:37.380632 1114752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:13:37.380665 1114752 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-841519/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-841519/.minikube}
	I1026 15:13:37.380705 1114752 ubuntu.go:190] setting up certificates
	I1026 15:13:37.380727 1114752 provision.go:84] configureAuth start
	I1026 15:13:37.380796 1114752 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450976
	I1026 15:13:37.399725 1114752 provision.go:143] copyHostCerts
	I1026 15:13:37.399829 1114752 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem, removing ...
	I1026 15:13:37.399846 1114752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem
	I1026 15:13:37.399931 1114752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem (1675 bytes)
	I1026 15:13:37.400150 1114752 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem, removing ...
	I1026 15:13:37.400182 1114752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem
	I1026 15:13:37.400227 1114752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem (1082 bytes)
	I1026 15:13:37.400369 1114752 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem, removing ...
	I1026 15:13:37.400382 1114752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem
	I1026 15:13:37.400421 1114752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem (1123 bytes)
	I1026 15:13:37.400512 1114752 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem org=jenkins.newest-cni-450976 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-450976]
	I1026 15:13:37.763701 1114752 provision.go:177] copyRemoteCerts
	I1026 15:13:37.763767 1114752 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:13:37.763819 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:37.783049 1114752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33867 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:37.887525 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:13:37.906903 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 15:13:37.926587 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 15:13:37.944386 1114752 provision.go:87] duration metric: took 563.640766ms to configureAuth
	I1026 15:13:37.944414 1114752 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:13:37.944614 1114752 config.go:182] Loaded profile config "newest-cni-450976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:37.944731 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:37.964140 1114752 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:37.964409 1114752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33867 <nil> <nil>}
	I1026 15:13:37.964428 1114752 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:13:38.255873 1114752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:13:38.255901 1114752 machine.go:96] duration metric: took 4.392559982s to provisionDockerMachine
	I1026 15:13:38.255917 1114752 start.go:293] postStartSetup for "newest-cni-450976" (driver="docker")
	I1026 15:13:38.255931 1114752 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:13:38.256000 1114752 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:13:38.256055 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:38.275739 1114752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33867 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:37.766706 1113766 cli_runner.go:164] Run: docker network inspect embed-certs-535130 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:13:37.785593 1113766 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 15:13:37.789819 1113766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:13:37.800845 1113766 kubeadm.go:883] updating cluster {Name:embed-certs-535130 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:13:37.801020 1113766 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:13:37.801095 1113766 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:13:37.834876 1113766 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:13:37.834902 1113766 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:13:37.834962 1113766 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:13:37.861255 1113766 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:13:37.861279 1113766 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:13:37.861322 1113766 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1026 15:13:37.861435 1113766 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-535130 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:13:37.861503 1113766 ssh_runner.go:195] Run: crio config
	I1026 15:13:37.912692 1113766 cni.go:84] Creating CNI manager for ""
	I1026 15:13:37.912714 1113766 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:13:37.912747 1113766 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:13:37.912784 1113766 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-535130 NodeName:embed-certs-535130 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:13:37.912927 1113766 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-535130"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
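The dump above is a single multi-document file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by ---. Per the scp line at 15:13:37.955865 below, it is written to /var/tmp/minikube/kubeadm.yaml.new (2214 bytes). On a fresh node a file of this shape would be consumed in one shot via kubeadm's --config flag; a hypothetical sketch of that step (this particular run takes the restart path instead and deliberately skips re-init, see restartPrimaryControlPlane further down):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Illustrative only: kubeadm accepts the whole multi-document
		// config at once. Paths match the ones shown in the log above.
		out, err := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.34.1/kubeadm",
			"init", "--config", "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
		if err != nil {
			log.Fatalf("kubeadm init: %v\n%s", err, out)
		}
		log.Printf("%s", out)
	}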
	
	I1026 15:13:37.913000 1113766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:13:37.921351 1113766 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:13:37.921430 1113766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:13:37.929571 1113766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1026 15:13:37.942588 1113766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:13:37.955865 1113766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1026 15:13:37.970133 1113766 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:13:37.974032 1113766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:13:37.985196 1113766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:38.073069 1113766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:38.095947 1113766 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130 for IP: 192.168.76.2
	I1026 15:13:38.095969 1113766 certs.go:195] generating shared ca certs ...
	I1026 15:13:38.095990 1113766 certs.go:227] acquiring lock for ca certs: {Name:mkc310765b5f037cf348f6c57ba521193a825757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:38.096157 1113766 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key
	I1026 15:13:38.096247 1113766 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key
	I1026 15:13:38.096263 1113766 certs.go:257] generating profile certs ...
	I1026 15:13:38.096402 1113766 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/client.key
	I1026 15:13:38.096505 1113766 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key.abe399f3
	I1026 15:13:38.096557 1113766 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.key
	I1026 15:13:38.096790 1113766 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem (1338 bytes)
	W1026 15:13:38.096865 1113766 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095_empty.pem, impossibly tiny 0 bytes
	I1026 15:13:38.096882 1113766 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:13:38.096913 1113766 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:13:38.096948 1113766 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:13:38.096970 1113766 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem (1675 bytes)
	I1026 15:13:38.097027 1113766 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:13:38.097985 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:13:38.117316 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:13:38.141746 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:13:38.162963 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:13:38.188391 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1026 15:13:38.209813 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 15:13:38.228846 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:13:38.247538 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:13:38.267934 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem --> /usr/share/ca-certificates/845095.pem (1338 bytes)
	I1026 15:13:38.287253 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /usr/share/ca-certificates/8450952.pem (1708 bytes)
	I1026 15:13:38.306737 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:13:38.325248 1113766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:13:38.338026 1113766 ssh_runner.go:195] Run: openssl version
	I1026 15:13:38.344312 1113766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845095.pem && ln -fs /usr/share/ca-certificates/845095.pem /etc/ssl/certs/845095.pem"
	I1026 15:13:38.353974 1113766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845095.pem
	I1026 15:13:38.358501 1113766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:26 /usr/share/ca-certificates/845095.pem
	I1026 15:13:38.358573 1113766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845095.pem
	I1026 15:13:38.395847 1113766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/845095.pem /etc/ssl/certs/51391683.0"
	I1026 15:13:38.404522 1113766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8450952.pem && ln -fs /usr/share/ca-certificates/8450952.pem /etc/ssl/certs/8450952.pem"
	I1026 15:13:38.414054 1113766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8450952.pem
	I1026 15:13:38.418460 1113766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:26 /usr/share/ca-certificates/8450952.pem
	I1026 15:13:38.418516 1113766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8450952.pem
	I1026 15:13:38.454059 1113766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8450952.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:13:38.462770 1113766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:13:38.471399 1113766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:38.475250 1113766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:14 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:38.475300 1113766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:38.510924 1113766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:13:38.519486 1113766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:13:38.523384 1113766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 15:13:38.561625 1113766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 15:13:38.601091 1113766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 15:13:38.651816 1113766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 15:13:38.697241 1113766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 15:13:38.754098 1113766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
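Each openssl x509 ... -checkend 86400 run above exits non-zero if the certificate expires within the next 24 hours, which is the signal to regenerate it. The same check expressed with Go's standard library (a sketch; the path is the first of the certificates probed above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent of `openssl x509 -checkend 86400`.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h; regenerate")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least another 24h")
	}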
	I1026 15:13:38.813908 1113766 kubeadm.go:400] StartCluster: {Name:embed-certs-535130 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:13:38.814039 1113766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:13:38.814105 1113766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:13:38.850213 1113766 cri.go:89] found id: "79f294b1af5377dbbe09bff36c0ce752c337fff26f468f52ba372eeae7c2fbd7"
	I1026 15:13:38.850237 1113766 cri.go:89] found id: "0cf664b8ea8fd4397a4e4d0903d086cb617b472ad1631050bc542a9e5c06ca09"
	I1026 15:13:38.850243 1113766 cri.go:89] found id: "43565d9e1913984f12b45a1203fca769c7b760ccf18830408972ff108c39b9bf"
	I1026 15:13:38.850248 1113766 cri.go:89] found id: "7f30d07b339ab7331f72cd45f5f34ee9c7eb82bec1197a77db9c34d2fcb6c24b"
	I1026 15:13:38.850252 1113766 cri.go:89] found id: ""
	I1026 15:13:38.850291 1113766 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 15:13:38.865515 1113766 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:13:38Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:13:38.865607 1113766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:13:38.878426 1113766 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 15:13:38.878485 1113766 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 15:13:38.878632 1113766 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 15:13:38.890199 1113766 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 15:13:38.891157 1113766 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-535130" does not appear in /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:13:38.891705 1113766 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-841519/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-535130" cluster setting kubeconfig missing "embed-certs-535130" context setting]
	I1026 15:13:38.892512 1113766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:38.894605 1113766 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 15:13:38.904931 1113766 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1026 15:13:38.904968 1113766 kubeadm.go:601] duration metric: took 26.475851ms to restartPrimaryControlPlane
	I1026 15:13:38.904979 1113766 kubeadm.go:402] duration metric: took 91.083527ms to StartCluster
	I1026 15:13:38.904999 1113766 settings.go:142] acquiring lock: {Name:mkab79daecf1fab35293493e1e2484069a81f3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:38.905074 1113766 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:13:38.907087 1113766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:38.907395 1113766 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:13:38.907661 1113766 config.go:182] Loaded profile config "embed-certs-535130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:38.907720 1113766 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:13:38.907797 1113766 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-535130"
	I1026 15:13:38.907828 1113766 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-535130"
	W1026 15:13:38.907836 1113766 addons.go:247] addon storage-provisioner should already be in state true
	I1026 15:13:38.907864 1113766 host.go:66] Checking if "embed-certs-535130" exists ...
	I1026 15:13:38.908130 1113766 addons.go:69] Setting dashboard=true in profile "embed-certs-535130"
	I1026 15:13:38.908184 1113766 addons.go:238] Setting addon dashboard=true in "embed-certs-535130"
	W1026 15:13:38.908193 1113766 addons.go:247] addon dashboard should already be in state true
	I1026 15:13:38.908220 1113766 host.go:66] Checking if "embed-certs-535130" exists ...
	I1026 15:13:38.908373 1113766 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:13:38.908397 1113766 addons.go:69] Setting default-storageclass=true in profile "embed-certs-535130"
	I1026 15:13:38.908423 1113766 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-535130"
	I1026 15:13:38.908708 1113766 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:13:38.908740 1113766 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:13:38.910199 1113766 out.go:179] * Verifying Kubernetes components...
	I1026 15:13:38.912350 1113766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:38.938355 1113766 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 15:13:38.940173 1113766 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:13:38.940965 1113766 addons.go:238] Setting addon default-storageclass=true in "embed-certs-535130"
	W1026 15:13:38.940987 1113766 addons.go:247] addon default-storageclass should already be in state true
	I1026 15:13:38.941016 1113766 host.go:66] Checking if "embed-certs-535130" exists ...
	I1026 15:13:38.941438 1113766 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 15:13:38.376923 1114752 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:13:38.380788 1114752 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:13:38.380831 1114752 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:13:38.380846 1114752 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/addons for local assets ...
	I1026 15:13:38.380907 1114752 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/files for local assets ...
	I1026 15:13:38.381022 1114752 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem -> 8450952.pem in /etc/ssl/certs
	I1026 15:13:38.381143 1114752 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:13:38.389543 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:13:38.409203 1114752 start.go:296] duration metric: took 153.266796ms for postStartSetup
	I1026 15:13:38.409313 1114752 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:13:38.409379 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:38.429864 1114752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33867 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:38.528464 1114752 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:13:38.533391 1114752 fix.go:56] duration metric: took 4.998404293s for fixHost
	I1026 15:13:38.533463 1114752 start.go:83] releasing machines lock for "newest-cni-450976", held for 4.998465392s
	I1026 15:13:38.533543 1114752 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450976
	I1026 15:13:38.553571 1114752 ssh_runner.go:195] Run: cat /version.json
	I1026 15:13:38.553643 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:38.553654 1114752 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:13:38.553767 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:38.574284 1114752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33867 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:38.574504 1114752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33867 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:38.676051 1114752 ssh_runner.go:195] Run: systemctl --version
	I1026 15:13:38.758754 1114752 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:13:38.813141 1114752 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:13:38.819297 1114752 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:13:38.819357 1114752 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:13:38.830001 1114752 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 15:13:38.830033 1114752 start.go:495] detecting cgroup driver to use...
	I1026 15:13:38.830069 1114752 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 15:13:38.830116 1114752 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:13:38.850256 1114752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:13:38.868194 1114752 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:13:38.868253 1114752 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:13:38.891101 1114752 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:13:38.910824 1114752 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:13:39.062926 1114752 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:13:39.179139 1114752 docker.go:234] disabling docker service ...
	I1026 15:13:39.179229 1114752 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:13:39.201712 1114752 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:13:39.223985 1114752 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:13:39.330872 1114752 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:13:39.440045 1114752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:13:39.456995 1114752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:13:39.475173 1114752 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:13:39.475235 1114752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:39.485838 1114752 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 15:13:39.485911 1114752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:39.497890 1114752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:39.509311 1114752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:39.521401 1114752 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:13:39.531708 1114752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:39.545553 1114752 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:39.558867 1114752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
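Net effect of the sed sequence above on the drop-in /etc/crio/crio.conf.d/02-crio.conf, reconstructed as a sketch from the commands themselves (section headers and neighboring keys are assumed from a stock CRI-O layout, not read from the file):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]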
	I1026 15:13:39.572132 1114752 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:13:39.582550 1114752 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:13:39.592870 1114752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:39.728450 1114752 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:13:39.862260 1114752 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:13:39.862332 1114752 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:13:39.867333 1114752 start.go:563] Will wait 60s for crictl version
	I1026 15:13:39.867406 1114752 ssh_runner.go:195] Run: which crictl
	I1026 15:13:39.872243 1114752 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:13:39.903804 1114752 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:13:39.903885 1114752 ssh_runner.go:195] Run: crio --version
	I1026 15:13:39.940255 1114752 ssh_runner.go:195] Run: crio --version
	I1026 15:13:39.980141 1114752 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:13:39.981402 1114752 cli_runner.go:164] Run: docker network inspect newest-cni-450976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:13:40.009905 1114752 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1026 15:13:40.014650 1114752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:13:40.027719 1114752 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1026 15:13:38.941506 1113766 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:38.941523 1113766 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:13:38.941613 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:38.941532 1113766 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:13:38.942632 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 15:13:38.942653 1113766 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 15:13:38.942702 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:38.976439 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:38.978129 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:38.980802 1113766 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:38.980863 1113766 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:13:38.981009 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:39.014150 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:39.104677 1113766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:39.122534 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:13:39.122561 1113766 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:13:39.123493 1113766 node_ready.go:35] waiting up to 6m0s for node "embed-certs-535130" to be "Ready" ...
	I1026 15:13:39.128461 1113766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:39.137116 1113766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:39.143559 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:13:39.143586 1113766 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:13:39.164258 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:13:39.164287 1113766 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:13:39.185403 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:13:39.185487 1113766 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:13:39.204860 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:13:39.204885 1113766 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:13:39.231456 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:13:39.231485 1113766 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:13:39.247596 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:13:39.247622 1113766 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:13:39.268795 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:13:39.268827 1113766 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:13:39.285975 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:13:39.286003 1113766 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:13:39.300204 1113766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:13:40.480787 1113766 node_ready.go:49] node "embed-certs-535130" is "Ready"
	I1026 15:13:40.480821 1113766 node_ready.go:38] duration metric: took 1.357286103s for node "embed-certs-535130" to be "Ready" ...
	I1026 15:13:40.480838 1113766 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:13:40.480891 1113766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:13:41.063718 1113766 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.926561303s)
	I1026 15:13:41.064077 1113766 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.763825551s)
	I1026 15:13:41.064352 1113766 api_server.go:72] duration metric: took 2.156917709s to wait for apiserver process to appear ...
	I1026 15:13:41.064364 1113766 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:13:41.064384 1113766 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:13:41.066580 1113766 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.938084789s)
	I1026 15:13:41.068720 1113766 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-535130 addons enable metrics-server
	
	I1026 15:13:41.072450 1113766 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:13:41.072475 1113766 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
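The 500 here is the usual transient state during control-plane startup: every check passes except the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks, which complete moments after the apiserver begins serving, so minikube keeps polling until /healthz flips to 200. To re-run the same probe by hand (a sketch; it assumes the kubeconfig context is named after the profile, as minikube sets it up):

    # asks the apiserver for the per-check breakdown shown above
    kubectl --context embed-certs-535130 get --raw '/healthz?verbose'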
	I1026 15:13:41.079563 1113766 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1026 15:13:41.081450 1113766 addons.go:514] duration metric: took 2.1737229s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1026 15:13:40.028927 1114752 kubeadm.go:883] updating cluster {Name:newest-cni-450976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-450976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:13:40.029111 1114752 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:13:40.029202 1114752 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:13:40.066752 1114752 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:13:40.066779 1114752 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:13:40.066837 1114752 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:13:40.095689 1114752 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:13:40.095711 1114752 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:13:40.095719 1114752 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1026 15:13:40.095834 1114752 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-450976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-450976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:13:40.095896 1114752 ssh_runner.go:195] Run: crio config
	I1026 15:13:40.174353 1114752 cni.go:84] Creating CNI manager for ""
	I1026 15:13:40.174388 1114752 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:13:40.174417 1114752 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1026 15:13:40.174447 1114752 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-450976 NodeName:newest-cni-450976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:13:40.174628 1114752 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-450976"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 15:13:40.174714 1114752 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:13:40.185063 1114752 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:13:40.185142 1114752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:13:40.193834 1114752 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1026 15:13:40.207803 1114752 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:13:40.221135 1114752 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
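The 2214-byte payload staged here is the kubeadm config rendered above, written as kubeadm.yaml.new. On a fresh cluster that file would drive kubeadm directly, roughly as sketched below; on this restart path minikube instead diffs it against the live kubeadm.yaml (the `sudo diff -u` run further down) to decide whether the control plane needs reconfiguring.

    # hedged sketch of the fresh-start equivalent (minikube passes more flags than shown)
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new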
	I1026 15:13:40.235918 1114752 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:13:40.239959 1114752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:13:40.256497 1114752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:40.359653 1114752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:40.395140 1114752 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976 for IP: 192.168.103.2
	I1026 15:13:40.395205 1114752 certs.go:195] generating shared ca certs ...
	I1026 15:13:40.395229 1114752 certs.go:227] acquiring lock for ca certs: {Name:mkc310765b5f037cf348f6c57ba521193a825757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:40.395390 1114752 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key
	I1026 15:13:40.395438 1114752 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key
	I1026 15:13:40.395452 1114752 certs.go:257] generating profile certs ...
	I1026 15:13:40.395587 1114752 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/client.key
	I1026 15:13:40.395677 1114752 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/apiserver.key.6904aab9
	I1026 15:13:40.395726 1114752 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/proxy-client.key
	I1026 15:13:40.395894 1114752 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem (1338 bytes)
	W1026 15:13:40.395936 1114752 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095_empty.pem, impossibly tiny 0 bytes
	I1026 15:13:40.395950 1114752 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:13:40.395985 1114752 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:13:40.396018 1114752 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:13:40.396050 1114752 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem (1675 bytes)
	I1026 15:13:40.396105 1114752 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:13:40.396848 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:13:40.428740 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:13:40.467100 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:13:40.505682 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:13:40.537741 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 15:13:40.570121 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 15:13:40.595584 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:13:40.623177 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:13:40.644134 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem --> /usr/share/ca-certificates/845095.pem (1338 bytes)
	I1026 15:13:40.667283 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /usr/share/ca-certificates/8450952.pem (1708 bytes)
	I1026 15:13:40.688417 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:13:40.708044 1114752 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:13:40.721964 1114752 ssh_runner.go:195] Run: openssl version
	I1026 15:13:40.729493 1114752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:13:40.740489 1114752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:40.745099 1114752 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:14 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:40.745235 1114752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:40.783506 1114752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:13:40.793310 1114752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845095.pem && ln -fs /usr/share/ca-certificates/845095.pem /etc/ssl/certs/845095.pem"
	I1026 15:13:40.803928 1114752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845095.pem
	I1026 15:13:40.808231 1114752 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:26 /usr/share/ca-certificates/845095.pem
	I1026 15:13:40.808294 1114752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845095.pem
	I1026 15:13:40.855542 1114752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/845095.pem /etc/ssl/certs/51391683.0"
	I1026 15:13:40.865852 1114752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8450952.pem && ln -fs /usr/share/ca-certificates/8450952.pem /etc/ssl/certs/8450952.pem"
	I1026 15:13:40.876943 1114752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8450952.pem
	I1026 15:13:40.881960 1114752 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:26 /usr/share/ca-certificates/8450952.pem
	I1026 15:13:40.882035 1114752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8450952.pem
	I1026 15:13:40.930751 1114752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8450952.pem /etc/ssl/certs/3ec20f2e.0"
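The openssl x509 -hash calls compute the subject-name hash that OpenSSL's CApath lookup expects as a file name; that is where the otherwise opaque symlink names b5213941.0, 51391683.0 and 3ec20f2e.0 above come from (the .0 suffix numbers hash collisions). The same linking done by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem  # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0      # what c_rehash would create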
	I1026 15:13:40.941036 1114752 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:13:40.946656 1114752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 15:13:40.990223 1114752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 15:13:41.044589 1114752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 15:13:41.095556 1114752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 15:13:41.150431 1114752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 15:13:41.203945 1114752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1026 15:13:41.260135 1114752 kubeadm.go:400] StartCluster: {Name:newest-cni-450976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-450976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:13:41.260282 1114752 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:13:41.260381 1114752 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:13:41.312408 1114752 cri.go:89] found id: "d301b19a9754fef9062ff0ab32cef39843a3b341f9c9c9c979ce50772e060f34"
	I1026 15:13:41.312495 1114752 cri.go:89] found id: "eca31c4960e5fee40ff7a27e80d78ba23e050229040a9c119c1a39d6d964c134"
	I1026 15:13:41.312506 1114752 cri.go:89] found id: "7b4821416cdb1f5a1c75031b5a1a9853efa078e8f2964c61061e443a8fe518d0"
	I1026 15:13:41.312512 1114752 cri.go:89] found id: "dad7b5a044afb9affbe248c4fce4bf89b73634fb0298fd50fe83199eecb4779f"
	I1026 15:13:41.312516 1114752 cri.go:89] found id: ""
	I1026 15:13:41.312586 1114752 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 15:13:41.328414 1114752 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:13:41Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:13:41.328490 1114752 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:13:41.339143 1114752 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 15:13:41.339202 1114752 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 15:13:41.339274 1114752 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 15:13:41.349811 1114752 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 15:13:41.351328 1114752 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-450976" does not appear in /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:13:41.352331 1114752 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-841519/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-450976" cluster setting kubeconfig missing "newest-cni-450976" context setting]
	I1026 15:13:41.353686 1114752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:41.356482 1114752 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 15:13:41.368106 1114752 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1026 15:13:41.368228 1114752 kubeadm.go:601] duration metric: took 29.01603ms to restartPrimaryControlPlane
	I1026 15:13:41.368248 1114752 kubeadm.go:402] duration metric: took 108.140463ms to StartCluster
	I1026 15:13:41.368309 1114752 settings.go:142] acquiring lock: {Name:mkab79daecf1fab35293493e1e2484069a81f3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:41.368403 1114752 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:13:41.371525 1114752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:41.371844 1114752 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:13:41.371893 1114752 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:13:41.371998 1114752 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-450976"
	I1026 15:13:41.372027 1114752 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-450976"
	W1026 15:13:41.372049 1114752 addons.go:247] addon storage-provisioner should already be in state true
	I1026 15:13:41.372062 1114752 addons.go:69] Setting dashboard=true in profile "newest-cni-450976"
	I1026 15:13:41.372077 1114752 addons.go:238] Setting addon dashboard=true in "newest-cni-450976"
	I1026 15:13:41.372081 1114752 host.go:66] Checking if "newest-cni-450976" exists ...
	W1026 15:13:41.372084 1114752 addons.go:247] addon dashboard should already be in state true
	I1026 15:13:41.372094 1114752 config.go:182] Loaded profile config "newest-cni-450976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:41.372107 1114752 host.go:66] Checking if "newest-cni-450976" exists ...
	I1026 15:13:41.372146 1114752 addons.go:69] Setting default-storageclass=true in profile "newest-cni-450976"
	I1026 15:13:41.372184 1114752 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-450976"
	I1026 15:13:41.372469 1114752 cli_runner.go:164] Run: docker container inspect newest-cni-450976 --format={{.State.Status}}
	I1026 15:13:41.372627 1114752 cli_runner.go:164] Run: docker container inspect newest-cni-450976 --format={{.State.Status}}
	I1026 15:13:41.372627 1114752 cli_runner.go:164] Run: docker container inspect newest-cni-450976 --format={{.State.Status}}
	I1026 15:13:41.375710 1114752 out.go:179] * Verifying Kubernetes components...
	I1026 15:13:41.377073 1114752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:41.403083 1114752 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 15:13:41.403092 1114752 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:13:41.404381 1114752 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:41.404403 1114752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:13:41.404443 1114752 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1026 15:13:37.050036 1107827 node_ready.go:57] node "auto-498531" has "Ready":"False" status (will retry)
	W1026 15:13:39.051344 1107827 node_ready.go:57] node "auto-498531" has "Ready":"False" status (will retry)
	W1026 15:13:41.550747 1107827 node_ready.go:57] node "auto-498531" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 26 15:13:31 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:31.483427019Z" level=info msg="Starting container: 5258d16e2d40d1013a5e5c71a95cf0dd28f468e0d789b886d5652469cf1c5a17" id=d8c60310-2c4e-4dc9-90cf-c152d88cf3b8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:13:31 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:31.485903855Z" level=info msg="Started container" PID=1848 containerID=5258d16e2d40d1013a5e5c71a95cf0dd28f468e0d789b886d5652469cf1c5a17 description=kube-system/coredns-66bc5c9577-shw6l/coredns id=d8c60310-2c4e-4dc9-90cf-c152d88cf3b8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f4cf9905fd937ac60bde54809cd5500200b10d3e10fe3a22f58737e4214e9579
	Oct 26 15:13:34 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:34.437257791Z" level=info msg="Running pod sandbox: default/busybox/POD" id=09039e5b-a792-4ec6-8913-11e461b506e7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:13:34 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:34.437368876Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:34 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:34.44223909Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e6fdcce03d1e3d11bcfce0641c90f313935aab8cf6e37846ac3f2dd43d6d339c UID:e7ac2f4d-99cd-4d92-9325-8c4c3b4aeac4 NetNS:/var/run/netns/9495a92f-1a7f-406b-aa3d-912242a05269 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ad30}] Aliases:map[]}"
	Oct 26 15:13:34 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:34.442270469Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 26 15:13:34 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:34.452052387Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e6fdcce03d1e3d11bcfce0641c90f313935aab8cf6e37846ac3f2dd43d6d339c UID:e7ac2f4d-99cd-4d92-9325-8c4c3b4aeac4 NetNS:/var/run/netns/9495a92f-1a7f-406b-aa3d-912242a05269 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ad30}] Aliases:map[]}"
	Oct 26 15:13:34 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:34.452205375Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 26 15:13:34 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:34.452951337Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 15:13:34 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:34.453907341Z" level=info msg="Ran pod sandbox e6fdcce03d1e3d11bcfce0641c90f313935aab8cf6e37846ac3f2dd43d6d339c with infra container: default/busybox/POD" id=09039e5b-a792-4ec6-8913-11e461b506e7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:13:34 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:34.455312948Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d0c6c1de-25e5-4b7b-b568-0b31515a4ba2 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:34 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:34.455427642Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d0c6c1de-25e5-4b7b-b568-0b31515a4ba2 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:34 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:34.455462867Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d0c6c1de-25e5-4b7b-b568-0b31515a4ba2 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:34 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:34.456138638Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=59112dd7-ac18-43e9-be42-cf3820e7b385 name=/runtime.v1.ImageService/PullImage
	Oct 26 15:13:34 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:34.459132842Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 26 15:13:35 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:35.236821552Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=59112dd7-ac18-43e9-be42-cf3820e7b385 name=/runtime.v1.ImageService/PullImage
	Oct 26 15:13:35 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:35.237656183Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c47d5b72-6a35-4eb4-8e5d-d3ae33d6f8d7 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:35 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:35.239081454Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b7943304-efb1-42de-9840-1f69756d9eb6 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:35 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:35.242444459Z" level=info msg="Creating container: default/busybox/busybox" id=fb53299c-977e-4880-bf42-0a9f09c1bf5e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:13:35 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:35.242605769Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:35 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:35.246313177Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:35 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:35.246851815Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:35 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:35.276329081Z" level=info msg="Created container 69bc36e36522bff6bf904ce8abe7dca2e7309dec17ef2908bab26cc07fc6d11a: default/busybox/busybox" id=fb53299c-977e-4880-bf42-0a9f09c1bf5e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:13:35 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:35.277632606Z" level=info msg="Starting container: 69bc36e36522bff6bf904ce8abe7dca2e7309dec17ef2908bab26cc07fc6d11a" id=7244bb0c-8930-417f-8584-a9e9ec7222cb name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:13:35 default-k8s-diff-port-790012 crio[775]: time="2025-10-26T15:13:35.27997546Z" level=info msg="Started container" PID=1924 containerID=69bc36e36522bff6bf904ce8abe7dca2e7309dec17ef2908bab26cc07fc6d11a description=default/busybox/busybox id=7244bb0c-8930-417f-8584-a9e9ec7222cb name=/runtime.v1.RuntimeService/StartContainer sandboxID=e6fdcce03d1e3d11bcfce0641c90f313935aab8cf6e37846ac3f2dd43d6d339c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	69bc36e36522b       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   e6fdcce03d1e3       busybox                                                default
	5258d16e2d40d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   f4cf9905fd937       coredns-66bc5c9577-shw6l                               kube-system
	0e8266e730e65       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   6f1ffea5b8320       storage-provisioner                                    kube-system
	3675af7147d72       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      22 seconds ago      Running             kindnet-cni               0                   96ebd20cfdac1       kindnet-7ch5r                                          kube-system
	cd5a93332e6d4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      22 seconds ago      Running             kube-proxy                0                   45a705a2c6a56       kube-proxy-wk2nn                                       kube-system
	1ba1b200df6f2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   09945ef26d5ce       kube-apiserver-default-k8s-diff-port-790012            kube-system
	4cc93732e0cd0       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   330ae35ef9f34       kube-scheduler-default-k8s-diff-port-790012            kube-system
	18e82eef9c541       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   bf6612459ad04       etcd-default-k8s-diff-port-790012                      kube-system
	9ad4e9464d836       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   c0c00afefe08d       kube-controller-manager-default-k8s-diff-port-790012   kube-system
	
	
	==> coredns [5258d16e2d40d1013a5e5c71a95cf0dd28f468e0d789b886d5652469cf1c5a17] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44581 - 47284 "HINFO IN 1838032489086040361.3681861599154546113. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.458375763s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-790012
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-790012
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=default-k8s-diff-port-790012
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_13_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:13:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-790012
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:13:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:13:31 +0000   Sun, 26 Oct 2025 15:13:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:13:31 +0000   Sun, 26 Oct 2025 15:13:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:13:31 +0000   Sun, 26 Oct 2025 15:13:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:13:31 +0000   Sun, 26 Oct 2025 15:13:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-790012
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                fc981cf4-4aaf-42bf-b320-22476764867d
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-shw6l                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-790012                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-7ch5r                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-790012             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-790012    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-wk2nn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-790012             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22s                kube-proxy       
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x8 over 37s)  kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node default-k8s-diff-port-790012 event: Registered Node default-k8s-diff-port-790012 in Controller
	  Normal  NodeReady                12s                kubelet          Node default-k8s-diff-port-790012 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	
	
	==> etcd [18e82eef9c5415c8d662d15f67607e4564bd927c6ad37a0815c30a4cc9f32ceb] <==
	{"level":"info","ts":"2025-10-26T15:13:10.126999Z","caller":"traceutil/trace.go:172","msg":"trace[1287574971] transaction","detail":"{read_only:false; response_revision:60; number_of_response:1; }","duration":"134.998715ms","start":"2025-10-26T15:13:09.991973Z","end":"2025-10-26T15:13:10.126972Z","steps":["trace[1287574971] 'process raft request'  (duration: 126.158256ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T15:13:10.127046Z","caller":"traceutil/trace.go:172","msg":"trace[1055026897] transaction","detail":"{read_only:false; response_revision:62; number_of_response:1; }","duration":"124.274928ms","start":"2025-10-26T15:13:10.002755Z","end":"2025-10-26T15:13:10.127030Z","steps":["trace[1055026897] 'process raft request'  (duration: 124.212316ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T15:13:10.127175Z","caller":"traceutil/trace.go:172","msg":"trace[1301526602] transaction","detail":"{read_only:false; response_revision:61; number_of_response:1; }","duration":"125.127726ms","start":"2025-10-26T15:13:10.002015Z","end":"2025-10-26T15:13:10.127142Z","steps":["trace[1301526602] 'process raft request'  (duration: 124.916546ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T15:13:10.323455Z","caller":"traceutil/trace.go:172","msg":"trace[241153247] transaction","detail":"{read_only:false; response_revision:64; number_of_response:1; }","duration":"192.521733ms","start":"2025-10-26T15:13:10.130912Z","end":"2025-10-26T15:13:10.323434Z","steps":["trace[241153247] 'process raft request'  (duration: 133.105326ms)","trace[241153247] 'compare'  (duration: 59.29363ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T15:13:10.491482Z","caller":"traceutil/trace.go:172","msg":"trace[1578114037] transaction","detail":"{read_only:false; response_revision:66; number_of_response:1; }","duration":"159.628623ms","start":"2025-10-26T15:13:10.331831Z","end":"2025-10-26T15:13:10.491460Z","steps":["trace[1578114037] 'process raft request'  (duration: 131.79514ms)","trace[1578114037] 'compare'  (duration: 27.733157ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T15:13:10.659146Z","caller":"traceutil/trace.go:172","msg":"trace[1794355020] linearizableReadLoop","detail":"{readStateIndex:71; appliedIndex:71; }","duration":"121.540146ms","start":"2025-10-26T15:13:10.537578Z","end":"2025-10-26T15:13:10.659118Z","steps":["trace[1794355020] 'read index received'  (duration: 121.529883ms)","trace[1794355020] 'applied index is now lower than readState.Index'  (duration: 8.653µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T15:13:10.683079Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.471773ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-admin\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-26T15:13:10.683198Z","caller":"traceutil/trace.go:172","msg":"trace[1922694663] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-admin; range_end:; response_count:0; response_revision:67; }","duration":"145.606537ms","start":"2025-10-26T15:13:10.537574Z","end":"2025-10-26T15:13:10.683181Z","steps":["trace[1922694663] 'agreement among raft nodes before linearized reading'  (duration: 121.687778ms)","trace[1922694663] 'range keys from in-memory index tree'  (duration: 23.73595ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T15:13:10.683270Z","caller":"traceutil/trace.go:172","msg":"trace[1824926706] transaction","detail":"{read_only:false; response_revision:68; number_of_response:1; }","duration":"147.414904ms","start":"2025-10-26T15:13:10.535819Z","end":"2025-10-26T15:13:10.683234Z","steps":["trace[1824926706] 'process raft request'  (duration: 123.37925ms)","trace[1824926706] 'compare'  (duration: 23.834541ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T15:13:10.800409Z","caller":"traceutil/trace.go:172","msg":"trace[1934833586] linearizableReadLoop","detail":"{readStateIndex:72; appliedIndex:72; }","duration":"112.777214ms","start":"2025-10-26T15:13:10.687609Z","end":"2025-10-26T15:13:10.800387Z","steps":["trace[1934833586] 'read index received'  (duration: 112.768444ms)","trace[1934833586] 'applied index is now lower than readState.Index'  (duration: 7.101µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T15:13:10.928334Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"240.697992ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/edit\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-26T15:13:10.928439Z","caller":"traceutil/trace.go:172","msg":"trace[1102603396] range","detail":"{range_begin:/registry/clusterroles/edit; range_end:; response_count:0; response_revision:68; }","duration":"240.805418ms","start":"2025-10-26T15:13:10.687602Z","end":"2025-10-26T15:13:10.928408Z","steps":["trace[1102603396] 'agreement among raft nodes before linearized reading'  (duration: 112.860928ms)","trace[1102603396] 'range keys from in-memory index tree'  (duration: 127.803195ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T15:13:10.928935Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.998304ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596637432429789 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/priorityclasses/system-cluster-critical\" mod_revision:0 > success:<request_put:<key:\"/registry/priorityclasses/system-cluster-critical\" value_size:407 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-10-26T15:13:10.929036Z","caller":"traceutil/trace.go:172","msg":"trace[1528362468] transaction","detail":"{read_only:false; response_revision:69; number_of_response:1; }","duration":"242.294818ms","start":"2025-10-26T15:13:10.686716Z","end":"2025-10-26T15:13:10.929011Z","steps":["trace[1528362468] 'process raft request'  (duration: 113.707475ms)","trace[1528362468] 'compare'  (duration: 127.861305ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T15:13:11.074483Z","caller":"traceutil/trace.go:172","msg":"trace[318765928] linearizableReadLoop","detail":"{readStateIndex:73; appliedIndex:73; }","duration":"141.059738ms","start":"2025-10-26T15:13:10.933380Z","end":"2025-10-26T15:13:11.074440Z","steps":["trace[318765928] 'read index received'  (duration: 141.05098ms)","trace[318765928] 'applied index is now lower than readState.Index'  (duration: 7.265µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T15:13:11.106263Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"172.857304ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:discovery\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-26T15:13:11.106323Z","caller":"traceutil/trace.go:172","msg":"trace[412759952] range","detail":"{range_begin:/registry/clusterrolebindings/system:discovery; range_end:; response_count:0; response_revision:69; }","duration":"172.93344ms","start":"2025-10-26T15:13:10.933377Z","end":"2025-10-26T15:13:11.106310Z","steps":["trace[412759952] 'agreement among raft nodes before linearized reading'  (duration: 141.138926ms)","trace[412759952] 'range keys from in-memory index tree'  (duration: 31.684193ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T15:13:11.106503Z","caller":"traceutil/trace.go:172","msg":"trace[396963624] transaction","detail":"{read_only:false; response_revision:70; number_of_response:1; }","duration":"173.170933ms","start":"2025-10-26T15:13:10.933312Z","end":"2025-10-26T15:13:11.106483Z","steps":["trace[396963624] 'process raft request'  (duration: 141.176407ms)","trace[396963624] 'compare'  (duration: 31.73031ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T15:13:11.463005Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"266.757503ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:discovery\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-26T15:13:11.463087Z","caller":"traceutil/trace.go:172","msg":"trace[454041865] range","detail":"{range_begin:/registry/clusterroles/system:discovery; range_end:; response_count:0; response_revision:73; }","duration":"266.844166ms","start":"2025-10-26T15:13:11.196212Z","end":"2025-10-26T15:13:11.463056Z","steps":["trace[454041865] 'range keys from in-memory index tree'  (duration: 255.043205ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:13:11.463114Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"255.12201ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596637432429801 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/default-k8s-diff-port-790012.18721343f2f443c9\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/default-k8s-diff-port-790012.18721343f2f443c9\" value_size:690 lease:499224600577653986 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-10-26T15:13:11.463202Z","caller":"traceutil/trace.go:172","msg":"trace[823736616] transaction","detail":"{read_only:false; response_revision:74; number_of_response:1; }","duration":"267.532655ms","start":"2025-10-26T15:13:11.195657Z","end":"2025-10-26T15:13:11.463190Z","steps":["trace[823736616] 'process raft request'  (duration: 12.277135ms)","trace[823736616] 'compare'  (duration: 255.005772ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T15:13:11.595811Z","caller":"traceutil/trace.go:172","msg":"trace[1916141060] transaction","detail":"{read_only:false; response_revision:76; number_of_response:1; }","duration":"129.367172ms","start":"2025-10-26T15:13:11.466428Z","end":"2025-10-26T15:13:11.595795Z","steps":["trace[1916141060] 'process raft request'  (duration: 123.796239ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T15:13:11.812471Z","caller":"traceutil/trace.go:172","msg":"trace[1720268158] transaction","detail":"{read_only:false; response_revision:79; number_of_response:1; }","duration":"209.621331ms","start":"2025-10-26T15:13:11.602835Z","end":"2025-10-26T15:13:11.812456Z","steps":["trace[1720268158] 'process raft request'  (duration: 209.569566ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T15:13:11.812476Z","caller":"traceutil/trace.go:172","msg":"trace[1737453740] transaction","detail":"{read_only:false; response_revision:78; number_of_response:1; }","duration":"211.013011ms","start":"2025-10-26T15:13:11.601435Z","end":"2025-10-26T15:13:11.812448Z","steps":["trace[1737453740] 'process raft request'  (duration: 146.722046ms)","trace[1737453740] 'compare'  (duration: 64.100658ms)"],"step_count":2}
	
	
	==> kernel <==
	 15:13:43 up  2:56,  0 user,  load average: 3.36, 2.70, 1.81
	Linux default-k8s-diff-port-790012 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3675af7147d7241a2567c7495669b6e066c33d90a0b7ec722c828eb37a1578e0] <==
	I1026 15:13:20.502339       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:13:20.502938       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 15:13:20.503088       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:13:20.503103       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:13:20.503122       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:13:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:13:20.704634       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:13:20.704853       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:13:20.704872       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:13:20.705772       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 15:13:21.093996       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:13:21.094032       1 metrics.go:72] Registering metrics
	I1026 15:13:21.094116       1 controller.go:711] "Syncing nftables rules"
	I1026 15:13:30.709267       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:13:30.709326       1 main.go:301] handling current node
	I1026 15:13:40.707358       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:13:40.707398       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1ba1b200df6f27fe1c3c6aaebbdbd0f19b2d3b0048cc2596f619dca356f03608] <==
	I1026 15:13:09.616149       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 15:13:09.616212       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 15:13:09.616983       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1026 15:13:09.617009       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:13:09.624917       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:13:09.626189       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:13:09.649547       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 15:13:09.659282       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:13:10.684221       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 15:13:10.929913       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 15:13:10.929937       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:13:12.720971       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:13:12.805881       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:13:12.929860       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 15:13:12.937760       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1026 15:13:12.939522       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:13:12.945198       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:13:13.532470       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:13:13.961336       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:13:13.982585       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 15:13:13.998178       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 15:13:19.185711       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:13:19.190028       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:13:19.284340       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1026 15:13:19.595571       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [9ad4e9464d836429abac9b30e52be82ee576774471bea9490e76e3a7344a82ad] <==
	I1026 15:13:18.530090       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 15:13:18.530217       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 15:13:18.530340       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 15:13:18.530430       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 15:13:18.530441       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-790012"
	I1026 15:13:18.530732       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1026 15:13:18.530767       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 15:13:18.530958       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 15:13:18.531156       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 15:13:18.531733       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 15:13:18.531800       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 15:13:18.532204       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 15:13:18.532414       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 15:13:18.532544       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 15:13:18.532613       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 15:13:18.533009       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:13:18.533047       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:13:18.533075       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 15:13:18.534906       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 15:13:18.535665       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 15:13:18.541863       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:13:18.541868       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1026 15:13:18.554193       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 15:13:18.567645       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:13:33.533358       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [cd5a93332e6d40d4414efd7f7db7fd185ea83f402d8a9cb24cebd359741d3023] <==
	I1026 15:13:20.299270       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:13:20.364794       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:13:20.465314       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:13:20.465395       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 15:13:20.465489       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:13:20.485832       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:13:20.485896       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:13:20.491364       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:13:20.491751       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:13:20.491777       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:13:20.493384       1 config.go:309] "Starting node config controller"
	I1026 15:13:20.493451       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:13:20.493511       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:13:20.493663       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:13:20.493699       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:13:20.493714       1 config.go:200] "Starting service config controller"
	I1026 15:13:20.493837       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:13:20.493734       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:13:20.493918       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:13:20.593931       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:13:20.593949       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:13:20.593984       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4cc93732e0cd01c07c4adf5af77de2c11d3e3106580a98f3e75b69fdc211717d] <==
	E1026 15:13:09.570444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 15:13:09.570451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 15:13:09.570493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 15:13:10.380450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 15:13:10.455967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 15:13:10.539622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 15:13:10.643769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 15:13:10.728440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 15:13:10.807422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 15:13:10.852776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 15:13:10.887760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 15:13:10.940071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1026 15:13:10.967768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 15:13:10.981992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 15:13:11.007463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 15:13:11.019714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 15:13:11.034266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 15:13:11.108903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 15:13:11.111934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 15:13:11.121249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 15:13:11.137702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 15:13:11.140806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 15:13:12.214103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 15:13:12.281451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1026 15:13:12.865975       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:13:19 default-k8s-diff-port-790012 kubelet[1314]: I1026 15:13:19.369644    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/928b7499-0464-4469-9f74-0e72935a8464-xtables-lock\") pod \"kube-proxy-wk2nn\" (UID: \"928b7499-0464-4469-9f74-0e72935a8464\") " pod="kube-system/kube-proxy-wk2nn"
	Oct 26 15:13:19 default-k8s-diff-port-790012 kubelet[1314]: I1026 15:13:19.369697    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54b7119d-e62c-46d9-a2a6-2f5a0f1e4e17-lib-modules\") pod \"kindnet-7ch5r\" (UID: \"54b7119d-e62c-46d9-a2a6-2f5a0f1e4e17\") " pod="kube-system/kindnet-7ch5r"
	Oct 26 15:13:19 default-k8s-diff-port-790012 kubelet[1314]: I1026 15:13:19.369778    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/54b7119d-e62c-46d9-a2a6-2f5a0f1e4e17-cni-cfg\") pod \"kindnet-7ch5r\" (UID: \"54b7119d-e62c-46d9-a2a6-2f5a0f1e4e17\") " pod="kube-system/kindnet-7ch5r"
	Oct 26 15:13:19 default-k8s-diff-port-790012 kubelet[1314]: I1026 15:13:19.369843    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/928b7499-0464-4469-9f74-0e72935a8464-kube-proxy\") pod \"kube-proxy-wk2nn\" (UID: \"928b7499-0464-4469-9f74-0e72935a8464\") " pod="kube-system/kube-proxy-wk2nn"
	Oct 26 15:13:19 default-k8s-diff-port-790012 kubelet[1314]: I1026 15:13:19.369866    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/928b7499-0464-4469-9f74-0e72935a8464-lib-modules\") pod \"kube-proxy-wk2nn\" (UID: \"928b7499-0464-4469-9f74-0e72935a8464\") " pod="kube-system/kube-proxy-wk2nn"
	Oct 26 15:13:19 default-k8s-diff-port-790012 kubelet[1314]: I1026 15:13:19.369891    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v84nk\" (UniqueName: \"kubernetes.io/projected/928b7499-0464-4469-9f74-0e72935a8464-kube-api-access-v84nk\") pod \"kube-proxy-wk2nn\" (UID: \"928b7499-0464-4469-9f74-0e72935a8464\") " pod="kube-system/kube-proxy-wk2nn"
	Oct 26 15:13:19 default-k8s-diff-port-790012 kubelet[1314]: I1026 15:13:19.369921    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lbq8\" (UniqueName: \"kubernetes.io/projected/54b7119d-e62c-46d9-a2a6-2f5a0f1e4e17-kube-api-access-9lbq8\") pod \"kindnet-7ch5r\" (UID: \"54b7119d-e62c-46d9-a2a6-2f5a0f1e4e17\") " pod="kube-system/kindnet-7ch5r"
	Oct 26 15:13:19 default-k8s-diff-port-790012 kubelet[1314]: I1026 15:13:19.369947    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54b7119d-e62c-46d9-a2a6-2f5a0f1e4e17-xtables-lock\") pod \"kindnet-7ch5r\" (UID: \"54b7119d-e62c-46d9-a2a6-2f5a0f1e4e17\") " pod="kube-system/kindnet-7ch5r"
	Oct 26 15:13:19 default-k8s-diff-port-790012 kubelet[1314]: E1026 15:13:19.477782    1314 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 26 15:13:19 default-k8s-diff-port-790012 kubelet[1314]: E1026 15:13:19.477826    1314 projected.go:196] Error preparing data for projected volume kube-api-access-v84nk for pod kube-system/kube-proxy-wk2nn: configmap "kube-root-ca.crt" not found
	Oct 26 15:13:19 default-k8s-diff-port-790012 kubelet[1314]: E1026 15:13:19.477781    1314 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 26 15:13:19 default-k8s-diff-port-790012 kubelet[1314]: E1026 15:13:19.477916    1314 projected.go:196] Error preparing data for projected volume kube-api-access-9lbq8 for pod kube-system/kindnet-7ch5r: configmap "kube-root-ca.crt" not found
	Oct 26 15:13:19 default-k8s-diff-port-790012 kubelet[1314]: E1026 15:13:19.477926    1314 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/928b7499-0464-4469-9f74-0e72935a8464-kube-api-access-v84nk podName:928b7499-0464-4469-9f74-0e72935a8464 nodeName:}" failed. No retries permitted until 2025-10-26 15:13:19.977885228 +0000 UTC m=+6.238119111 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v84nk" (UniqueName: "kubernetes.io/projected/928b7499-0464-4469-9f74-0e72935a8464-kube-api-access-v84nk") pod "kube-proxy-wk2nn" (UID: "928b7499-0464-4469-9f74-0e72935a8464") : configmap "kube-root-ca.crt" not found
	Oct 26 15:13:19 default-k8s-diff-port-790012 kubelet[1314]: E1026 15:13:19.477962    1314 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54b7119d-e62c-46d9-a2a6-2f5a0f1e4e17-kube-api-access-9lbq8 podName:54b7119d-e62c-46d9-a2a6-2f5a0f1e4e17 nodeName:}" failed. No retries permitted until 2025-10-26 15:13:19.977944929 +0000 UTC m=+6.238178808 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9lbq8" (UniqueName: "kubernetes.io/projected/54b7119d-e62c-46d9-a2a6-2f5a0f1e4e17-kube-api-access-9lbq8") pod "kindnet-7ch5r" (UID: "54b7119d-e62c-46d9-a2a6-2f5a0f1e4e17") : configmap "kube-root-ca.crt" not found
	Oct 26 15:13:20 default-k8s-diff-port-790012 kubelet[1314]: I1026 15:13:20.935887    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7ch5r" podStartSLOduration=1.9358604000000001 podStartE2EDuration="1.9358604s" podCreationTimestamp="2025-10-26 15:13:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:13:20.914457346 +0000 UTC m=+7.174691228" watchObservedRunningTime="2025-10-26 15:13:20.9358604 +0000 UTC m=+7.196094282"
	Oct 26 15:13:22 default-k8s-diff-port-790012 kubelet[1314]: I1026 15:13:22.515005    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wk2nn" podStartSLOduration=3.5149805389999997 podStartE2EDuration="3.514980539s" podCreationTimestamp="2025-10-26 15:13:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:13:20.938480839 +0000 UTC m=+7.198714721" watchObservedRunningTime="2025-10-26 15:13:22.514980539 +0000 UTC m=+8.775214420"
	Oct 26 15:13:31 default-k8s-diff-port-790012 kubelet[1314]: I1026 15:13:31.100282    1314 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 26 15:13:31 default-k8s-diff-port-790012 kubelet[1314]: I1026 15:13:31.159773    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fghzm\" (UniqueName: \"kubernetes.io/projected/1f95e80f-9f93-44c4-b761-fd518de0c4d9-kube-api-access-fghzm\") pod \"storage-provisioner\" (UID: \"1f95e80f-9f93-44c4-b761-fd518de0c4d9\") " pod="kube-system/storage-provisioner"
	Oct 26 15:13:31 default-k8s-diff-port-790012 kubelet[1314]: I1026 15:13:31.159824    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1f95e80f-9f93-44c4-b761-fd518de0c4d9-tmp\") pod \"storage-provisioner\" (UID: \"1f95e80f-9f93-44c4-b761-fd518de0c4d9\") " pod="kube-system/storage-provisioner"
	Oct 26 15:13:31 default-k8s-diff-port-790012 kubelet[1314]: I1026 15:13:31.159918    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34b47d5d-504d-4f7a-905e-acd0787bad18-config-volume\") pod \"coredns-66bc5c9577-shw6l\" (UID: \"34b47d5d-504d-4f7a-905e-acd0787bad18\") " pod="kube-system/coredns-66bc5c9577-shw6l"
	Oct 26 15:13:31 default-k8s-diff-port-790012 kubelet[1314]: I1026 15:13:31.159966    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xn75\" (UniqueName: \"kubernetes.io/projected/34b47d5d-504d-4f7a-905e-acd0787bad18-kube-api-access-9xn75\") pod \"coredns-66bc5c9577-shw6l\" (UID: \"34b47d5d-504d-4f7a-905e-acd0787bad18\") " pod="kube-system/coredns-66bc5c9577-shw6l"
	Oct 26 15:13:31 default-k8s-diff-port-790012 kubelet[1314]: I1026 15:13:31.961081    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-shw6l" podStartSLOduration=12.961057247 podStartE2EDuration="12.961057247s" podCreationTimestamp="2025-10-26 15:13:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:13:31.960517893 +0000 UTC m=+18.220751775" watchObservedRunningTime="2025-10-26 15:13:31.961057247 +0000 UTC m=+18.221291129"
	Oct 26 15:13:31 default-k8s-diff-port-790012 kubelet[1314]: I1026 15:13:31.990003    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.989977894999999 podStartE2EDuration="11.989977895s" podCreationTimestamp="2025-10-26 15:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:13:31.989445263 +0000 UTC m=+18.249679144" watchObservedRunningTime="2025-10-26 15:13:31.989977895 +0000 UTC m=+18.250211778"
	Oct 26 15:13:34 default-k8s-diff-port-790012 kubelet[1314]: I1026 15:13:34.180687    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmfjj\" (UniqueName: \"kubernetes.io/projected/e7ac2f4d-99cd-4d92-9325-8c4c3b4aeac4-kube-api-access-kmfjj\") pod \"busybox\" (UID: \"e7ac2f4d-99cd-4d92-9325-8c4c3b4aeac4\") " pod="default/busybox"
	Oct 26 15:13:35 default-k8s-diff-port-790012 kubelet[1314]: I1026 15:13:35.969254    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.186416536 podStartE2EDuration="1.969229566s" podCreationTimestamp="2025-10-26 15:13:34 +0000 UTC" firstStartedPulling="2025-10-26 15:13:34.455708926 +0000 UTC m=+20.715942787" lastFinishedPulling="2025-10-26 15:13:35.238521936 +0000 UTC m=+21.498755817" observedRunningTime="2025-10-26 15:13:35.968828735 +0000 UTC m=+22.229062626" watchObservedRunningTime="2025-10-26 15:13:35.969229566 +0000 UTC m=+22.229463448"
	
	
	==> storage-provisioner [0e8266e730e6508504423215893b2750dd77c1b019e4b256d8496e25d73f17c3] <==
	I1026 15:13:31.493093       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 15:13:31.505005       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 15:13:31.505065       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 15:13:31.514488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:31.521829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:13:31.522075       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:13:31.522290       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-790012_93e9e44c-6ea3-4bc3-848a-0b21b732dc69!
	I1026 15:13:31.522318       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f0ccb008-188e-4240-a93f-ef906d571508", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-790012_93e9e44c-6ea3-4bc3-848a-0b21b732dc69 became leader
	W1026 15:13:31.525958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:31.532151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:13:31.622898       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-790012_93e9e44c-6ea3-4bc3-848a-0b21b732dc69!
	W1026 15:13:33.535735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:33.540199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:35.544369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:35.549054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:37.552619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:37.557210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:39.561822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:39.567232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:41.578488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:13:41.596584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
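The dump above combines kubectl describe node output for the profile's node with the logs of each control-plane and addon container. A minimal sketch for regathering the same data by hand, assuming the profile still exists and using the out/minikube-linux-amd64 binary path that appears elsewhere in this report:

	# Reproduce the node section of the dump
	kubectl --context default-k8s-diff-port-790012 describe node default-k8s-diff-port-790012
	# Reproduce the per-component log sections (dmesg, etcd, kubelet, ...)
	out/minikube-linux-amd64 -p default-k8s-diff-port-790012 logs
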
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-790012 -n default-k8s-diff-port-790012
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-790012 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.81s)
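The two helpers_test.go probes above are plain CLI calls and can be rerun by hand; a sketch using only the profile name and binary path copied from the log:

	# Ask minikube whether the apiserver for this profile is reported Running
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-790012 -n default-k8s-diff-port-790012
	# List every pod, in any namespace, whose phase is not Running
	kubectl --context default-k8s-diff-port-790012 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running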

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (7.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-450976 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-450976 --alsologtostderr -v=1: exit status 80 (2.437385333s)
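The failure happens inside the guest: as the stderr below shows, minikube enumerates running containers with sudo runc list -f json, which errors with open /run/runc: no such file or directory and is retried. A hedged way to check the runtime state directories on the node (the /run/crun path is an assumption, relevant only if CRI-O is configured with crun instead of runc):

	# See which runtime state directories actually exist in the guest
	out/minikube-linux-amd64 ssh -p newest-cni-450976 -- "sudo ls /run/runc /run/crun; sudo crictl ps"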

                                                
                                                
-- stdout --
	* Pausing node newest-cni-450976 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 15:13:45.350215 1119320 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:13:45.350513 1119320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:13:45.350531 1119320 out.go:374] Setting ErrFile to fd 2...
	I1026 15:13:45.350537 1119320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:13:45.350829 1119320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:13:45.351192 1119320 out.go:368] Setting JSON to false
	I1026 15:13:45.351255 1119320 mustload.go:65] Loading cluster: newest-cni-450976
	I1026 15:13:45.351789 1119320 config.go:182] Loaded profile config "newest-cni-450976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:45.352376 1119320 cli_runner.go:164] Run: docker container inspect newest-cni-450976 --format={{.State.Status}}
	I1026 15:13:45.378953 1119320 host.go:66] Checking if "newest-cni-450976" exists ...
	I1026 15:13:45.379322 1119320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:13:45.476662 1119320 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:84 OomKillDisable:false NGoroutines:98 SystemTime:2025-10-26 15:13:45.459380969 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:13:45.477581 1119320 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-450976 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 15:13:45.481428 1119320 out.go:179] * Pausing node newest-cni-450976 ... 
	I1026 15:13:45.482768 1119320 host.go:66] Checking if "newest-cni-450976" exists ...
	I1026 15:13:45.483136 1119320 ssh_runner.go:195] Run: systemctl --version
	I1026 15:13:45.483223 1119320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:45.512022 1119320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33867 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:45.628244 1119320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:13:45.646135 1119320 pause.go:52] kubelet running: true
	I1026 15:13:45.646274 1119320 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:13:45.843702 1119320 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:13:45.843836 1119320 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:13:45.930426 1119320 cri.go:89] found id: "4067fc481bc4baf4606a9f82937e71103371389384e0fc32bb2fb41a915456e9"
	I1026 15:13:45.930456 1119320 cri.go:89] found id: "28e4049021789d7b497ba2bfd04b269e3b3c2807c7507dd9f483593309c84b80"
	I1026 15:13:45.930462 1119320 cri.go:89] found id: "d301b19a9754fef9062ff0ab32cef39843a3b341f9c9c9c979ce50772e060f34"
	I1026 15:13:45.930466 1119320 cri.go:89] found id: "eca31c4960e5fee40ff7a27e80d78ba23e050229040a9c119c1a39d6d964c134"
	I1026 15:13:45.930471 1119320 cri.go:89] found id: "7b4821416cdb1f5a1c75031b5a1a9853efa078e8f2964c61061e443a8fe518d0"
	I1026 15:13:45.930476 1119320 cri.go:89] found id: "dad7b5a044afb9affbe248c4fce4bf89b73634fb0298fd50fe83199eecb4779f"
	I1026 15:13:45.930480 1119320 cri.go:89] found id: ""
	I1026 15:13:45.930551 1119320 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:13:45.945246 1119320 retry.go:31] will retry after 159.934595ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:13:45Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:13:46.105722 1119320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:13:46.119415 1119320 pause.go:52] kubelet running: false
	I1026 15:13:46.119481 1119320 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:13:46.279371 1119320 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:13:46.279470 1119320 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:13:46.365519 1119320 cri.go:89] found id: "4067fc481bc4baf4606a9f82937e71103371389384e0fc32bb2fb41a915456e9"
	I1026 15:13:46.365545 1119320 cri.go:89] found id: "28e4049021789d7b497ba2bfd04b269e3b3c2807c7507dd9f483593309c84b80"
	I1026 15:13:46.365551 1119320 cri.go:89] found id: "d301b19a9754fef9062ff0ab32cef39843a3b341f9c9c9c979ce50772e060f34"
	I1026 15:13:46.365556 1119320 cri.go:89] found id: "eca31c4960e5fee40ff7a27e80d78ba23e050229040a9c119c1a39d6d964c134"
	I1026 15:13:46.365560 1119320 cri.go:89] found id: "7b4821416cdb1f5a1c75031b5a1a9853efa078e8f2964c61061e443a8fe518d0"
	I1026 15:13:46.365564 1119320 cri.go:89] found id: "dad7b5a044afb9affbe248c4fce4bf89b73634fb0298fd50fe83199eecb4779f"
	I1026 15:13:46.365568 1119320 cri.go:89] found id: ""
	I1026 15:13:46.365616 1119320 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:13:46.382884 1119320 retry.go:31] will retry after 449.497227ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:13:46Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:13:46.833631 1119320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:13:46.850427 1119320 pause.go:52] kubelet running: false
	I1026 15:13:46.850507 1119320 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:13:46.990093 1119320 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:13:46.990206 1119320 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:13:47.065499 1119320 cri.go:89] found id: "4067fc481bc4baf4606a9f82937e71103371389384e0fc32bb2fb41a915456e9"
	I1026 15:13:47.065522 1119320 cri.go:89] found id: "28e4049021789d7b497ba2bfd04b269e3b3c2807c7507dd9f483593309c84b80"
	I1026 15:13:47.065527 1119320 cri.go:89] found id: "d301b19a9754fef9062ff0ab32cef39843a3b341f9c9c9c979ce50772e060f34"
	I1026 15:13:47.065532 1119320 cri.go:89] found id: "eca31c4960e5fee40ff7a27e80d78ba23e050229040a9c119c1a39d6d964c134"
	I1026 15:13:47.065536 1119320 cri.go:89] found id: "7b4821416cdb1f5a1c75031b5a1a9853efa078e8f2964c61061e443a8fe518d0"
	I1026 15:13:47.065540 1119320 cri.go:89] found id: "dad7b5a044afb9affbe248c4fce4bf89b73634fb0298fd50fe83199eecb4779f"
	I1026 15:13:47.065544 1119320 cri.go:89] found id: ""
	I1026 15:13:47.065594 1119320 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:13:47.079311 1119320 retry.go:31] will retry after 389.875529ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:13:47Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:13:47.469960 1119320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:13:47.483772 1119320 pause.go:52] kubelet running: false
	I1026 15:13:47.483833 1119320 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:13:47.598752 1119320 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:13:47.598844 1119320 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:13:47.676690 1119320 cri.go:89] found id: "4067fc481bc4baf4606a9f82937e71103371389384e0fc32bb2fb41a915456e9"
	I1026 15:13:47.676712 1119320 cri.go:89] found id: "28e4049021789d7b497ba2bfd04b269e3b3c2807c7507dd9f483593309c84b80"
	I1026 15:13:47.676715 1119320 cri.go:89] found id: "d301b19a9754fef9062ff0ab32cef39843a3b341f9c9c9c979ce50772e060f34"
	I1026 15:13:47.676718 1119320 cri.go:89] found id: "eca31c4960e5fee40ff7a27e80d78ba23e050229040a9c119c1a39d6d964c134"
	I1026 15:13:47.676721 1119320 cri.go:89] found id: "7b4821416cdb1f5a1c75031b5a1a9853efa078e8f2964c61061e443a8fe518d0"
	I1026 15:13:47.676724 1119320 cri.go:89] found id: "dad7b5a044afb9affbe248c4fce4bf89b73634fb0298fd50fe83199eecb4779f"
	I1026 15:13:47.676726 1119320 cri.go:89] found id: ""
	I1026 15:13:47.676762 1119320 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:13:47.693019 1119320 out.go:203] 
	W1026 15:13:47.694387 1119320 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:13:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 15:13:47.694408 1119320 out.go:285] * 
	W1026 15:13:47.699848 1119320 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 15:13:47.701370 1119320 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-450976 --alsologtostderr -v=1 failed: exit status 80
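Root cause, visible in the trace above: after stopping the kubelet, minikube's pause path asks runc directly for the running containers (sudo runc list -f json), and that call fails on this CRI-O node because runc's default state directory /run/runc does not exist. Every retry hits the same error, so the command gives up with GUEST_PAUSE even though the CRI-level listing (crictl ps) succeeded. A minimal diagnosis sketch, assuming SSH access to the node; the runtime_root grep is an assumption about where CRI-O records its runc state root, not something this log confirms:

	$ minikube ssh -p newest-cni-450976
	$ sudo runc list -f json            # reproduces the failure: "open /run/runc: no such file or directory"
	$ sudo ls -d /run/runc              # confirm the default state directory is absent
	$ sudo crictl ps -a --quiet         # CRI-level listing still works, as the log above shows
	$ sudo crio config 2>/dev/null | grep -n runtime_root   # assumed knob for CRI-O's runc state root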
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-450976
helpers_test.go:243: (dbg) docker inspect newest-cni-450976:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "780b6ec8823b2c38d1086c59e7fddd36420479fc7b248085a3cf4f4af2acf916",
	        "Created": "2025-10-26T15:12:59.003317793Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1114953,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:13:33.584353704Z",
	            "FinishedAt": "2025-10-26T15:13:32.658442501Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/780b6ec8823b2c38d1086c59e7fddd36420479fc7b248085a3cf4f4af2acf916/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/780b6ec8823b2c38d1086c59e7fddd36420479fc7b248085a3cf4f4af2acf916/hostname",
	        "HostsPath": "/var/lib/docker/containers/780b6ec8823b2c38d1086c59e7fddd36420479fc7b248085a3cf4f4af2acf916/hosts",
	        "LogPath": "/var/lib/docker/containers/780b6ec8823b2c38d1086c59e7fddd36420479fc7b248085a3cf4f4af2acf916/780b6ec8823b2c38d1086c59e7fddd36420479fc7b248085a3cf4f4af2acf916-json.log",
	        "Name": "/newest-cni-450976",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-450976:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-450976",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "780b6ec8823b2c38d1086c59e7fddd36420479fc7b248085a3cf4f4af2acf916",
	                "LowerDir": "/var/lib/docker/overlay2/fe3ecd958d722f7448e33c5d5e455e3fd3a3f1954f672020596a899bb4dc58eb-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fe3ecd958d722f7448e33c5d5e455e3fd3a3f1954f672020596a899bb4dc58eb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fe3ecd958d722f7448e33c5d5e455e3fd3a3f1954f672020596a899bb4dc58eb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fe3ecd958d722f7448e33c5d5e455e3fd3a3f1954f672020596a899bb4dc58eb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-450976",
	                "Source": "/var/lib/docker/volumes/newest-cni-450976/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-450976",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-450976",
	                "name.minikube.sigs.k8s.io": "newest-cni-450976",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e7e9411a676419dfeb2cd6927394356cb760dfa197e267d653bc022dbcacc23d",
	            "SandboxKey": "/var/run/docker/netns/e7e9411a6764",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33867"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33868"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33871"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33869"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33870"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-450976": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:7c:1a:c5:4a:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4254446822c371d2067f0edad3ee1d5a391333ca11c0b013055abf6c85fb5682",
	                    "EndpointID": "12e5838078d6af1936af6d1081db262ef67ea3f1e7a35721b11fe8ff0cc0a8d1",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-450976",
	                        "780b6ec8823b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
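The inspect dump confirms the container survived the failed pause: State.Status is "running", Paused is false, and SSH is still published on 127.0.0.1:33867. That port is exactly what the Go template earlier in the trace extracts; the lookup can be replayed on its own (output taken from the Ports section of this dump):

	$ docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-450976
	33867

The nested index calls select the "22/tcp" entry under .NetworkSettings.Ports, take its first binding, and print its HostPort field.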
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-450976 -n newest-cni-450976
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-450976 -n newest-cni-450976: exit status 2 (376.339938ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
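The --format={{.Host}} flag renders only the host field of the status through a Go template, which is why stdout above is the single word "Running"; the non-zero exit code flags that some other component is unhealthy (here most plausibly the kubelet, which the failed pause had already disabled), and the framework tolerates that ("may be ok"). A quick way to see the whole picture, using the same profile:

	$ out/minikube-linux-amd64 status -p newest-cni-450976   # prints host, kubelet, apiserver, kubeconfig states
	$ echo $?                                                # non-zero whenever any component is not running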
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-450976 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-450976 logs -n 25: (1.237487697s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p old-k8s-version-330914 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	│ delete  │ -p old-k8s-version-330914                                                                                                                                                                                                                     │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ image   │ no-preload-475081 image list --format=json                                                                                                                                                                                                    │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ pause   │ -p no-preload-475081 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	│ delete  │ -p old-k8s-version-330914                                                                                                                                                                                                                     │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ delete  │ -p disable-driver-mounts-619402                                                                                                                                                                                                               │ disable-driver-mounts-619402 │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p default-k8s-diff-port-790012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-790012 │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:13 UTC │
	│ delete  │ -p no-preload-475081                                                                                                                                                                                                                          │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ delete  │ -p no-preload-475081                                                                                                                                                                                                                          │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p newest-cni-450976 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-450976            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:13 UTC │
	│ start   │ -p cert-expiration-619245 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-619245       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:13 UTC │
	│ delete  │ -p cert-expiration-619245                                                                                                                                                                                                                     │ cert-expiration-619245       │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ start   │ -p auto-498531 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-498531                  │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-535130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-535130           │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ stop    │ -p embed-certs-535130 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-535130           │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ addons  │ enable metrics-server -p newest-cni-450976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-450976            │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ stop    │ -p newest-cni-450976 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-450976            │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ addons  │ enable dashboard -p embed-certs-535130 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-535130           │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ start   │ -p embed-certs-535130 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-535130           │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-450976 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-450976            │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ start   │ -p newest-cni-450976 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-450976            │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-790012 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-790012 │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-790012 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-790012 │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ image   │ newest-cni-450976 image list --format=json                                                                                                                                                                                                    │ newest-cni-450976            │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ pause   │ -p newest-cni-450976 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-450976            │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:13:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:13:33.334804 1114752 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:13:33.335030 1114752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:13:33.335037 1114752 out.go:374] Setting ErrFile to fd 2...
	I1026 15:13:33.335041 1114752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:13:33.335275 1114752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:13:33.335717 1114752 out.go:368] Setting JSON to false
	I1026 15:13:33.336864 1114752 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10561,"bootTime":1761481052,"procs":382,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:13:33.336965 1114752 start.go:141] virtualization: kvm guest
	I1026 15:13:33.338732 1114752 out.go:179] * [newest-cni-450976] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:13:33.340086 1114752 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:13:33.340115 1114752 notify.go:220] Checking for updates...
	I1026 15:13:33.342297 1114752 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:13:33.343663 1114752 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:13:33.344846 1114752 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 15:13:33.346031 1114752 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:13:33.347279 1114752 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:13:33.349221 1114752 config.go:182] Loaded profile config "newest-cni-450976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:33.349915 1114752 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:13:33.376031 1114752 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 15:13:33.376129 1114752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:13:33.438088 1114752 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-26 15:13:33.426631481 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:13:33.438228 1114752 docker.go:318] overlay module found
	I1026 15:13:33.440047 1114752 out.go:179] * Using the docker driver based on existing profile
	I1026 15:13:33.441532 1114752 start.go:305] selected driver: docker
	I1026 15:13:33.441548 1114752 start.go:925] validating driver "docker" against &{Name:newest-cni-450976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-450976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:13:33.441657 1114752 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:13:33.442266 1114752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:13:33.505289 1114752 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-26 15:13:33.494889004 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:13:33.505603 1114752 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 15:13:33.505638 1114752 cni.go:84] Creating CNI manager for ""
	I1026 15:13:33.505687 1114752 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:13:33.505724 1114752 start.go:349] cluster config:
	{Name:newest-cni-450976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-450976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:13:33.508668 1114752 out.go:179] * Starting "newest-cni-450976" primary control-plane node in "newest-cni-450976" cluster
	I1026 15:13:33.510071 1114752 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:13:33.511479 1114752 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:13:33.512708 1114752 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:13:33.512753 1114752 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:13:33.512777 1114752 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 15:13:33.512801 1114752 cache.go:58] Caching tarball of preloaded images
	I1026 15:13:33.512888 1114752 preload.go:233] Found /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 15:13:33.512898 1114752 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:13:33.512995 1114752 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/config.json ...
	I1026 15:13:33.534783 1114752 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:13:33.534810 1114752 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:13:33.534834 1114752 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:13:33.534873 1114752 start.go:360] acquireMachinesLock for newest-cni-450976: {Name:mkd25f5c88d69734bd3a1425b2ee7adeba19f996 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:13:33.534945 1114752 start.go:364] duration metric: took 46.831µs to acquireMachinesLock for "newest-cni-450976"
	I1026 15:13:33.534970 1114752 start.go:96] Skipping create...Using existing machine configuration
	I1026 15:13:33.534980 1114752 fix.go:54] fixHost starting: 
	I1026 15:13:33.535289 1114752 cli_runner.go:164] Run: docker container inspect newest-cni-450976 --format={{.State.Status}}
	I1026 15:13:33.554995 1114752 fix.go:112] recreateIfNeeded on newest-cni-450976: state=Stopped err=<nil>
	W1026 15:13:33.555041 1114752 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 15:13:31.934443 1100384 system_pods.go:86] 8 kube-system pods found
	I1026 15:13:31.934495 1100384 system_pods.go:89] "coredns-66bc5c9577-shw6l" [34b47d5d-504d-4f7a-905e-acd0787bad18] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:13:31.934506 1100384 system_pods.go:89] "etcd-default-k8s-diff-port-790012" [18a43e2a-b91b-4b24-a5f6-4ce939ee4840] Running
	I1026 15:13:31.934515 1100384 system_pods.go:89] "kindnet-7ch5r" [54b7119d-e62c-46d9-a2a6-2f5a0f1e4e17] Running
	I1026 15:13:31.934521 1100384 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-790012" [cdf846a0-22e6-4261-abdc-bd5f72bdbc80] Running
	I1026 15:13:31.934528 1100384 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-790012" [4e9cad9b-4439-4d70-98c2-10b7fcd16c25] Running
	I1026 15:13:31.934533 1100384 system_pods.go:89] "kube-proxy-wk2nn" [928b7499-0464-4469-9f74-0e72935a8464] Running
	I1026 15:13:31.934539 1100384 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-790012" [80d7b5ad-decf-4b5f-a03f-4f63aed757a1] Running
	I1026 15:13:31.934547 1100384 system_pods.go:89] "storage-provisioner" [1f95e80f-9f93-44c4-b761-fd518de0c4d9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:13:31.934574 1100384 retry.go:31] will retry after 343.738043ms: missing components: kube-dns
	I1026 15:13:32.282807 1100384 system_pods.go:86] 8 kube-system pods found
	I1026 15:13:32.282849 1100384 system_pods.go:89] "coredns-66bc5c9577-shw6l" [34b47d5d-504d-4f7a-905e-acd0787bad18] Running
	I1026 15:13:32.282858 1100384 system_pods.go:89] "etcd-default-k8s-diff-port-790012" [18a43e2a-b91b-4b24-a5f6-4ce939ee4840] Running
	I1026 15:13:32.282865 1100384 system_pods.go:89] "kindnet-7ch5r" [54b7119d-e62c-46d9-a2a6-2f5a0f1e4e17] Running
	I1026 15:13:32.282871 1100384 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-790012" [cdf846a0-22e6-4261-abdc-bd5f72bdbc80] Running
	I1026 15:13:32.282875 1100384 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-790012" [4e9cad9b-4439-4d70-98c2-10b7fcd16c25] Running
	I1026 15:13:32.282878 1100384 system_pods.go:89] "kube-proxy-wk2nn" [928b7499-0464-4469-9f74-0e72935a8464] Running
	I1026 15:13:32.282881 1100384 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-790012" [80d7b5ad-decf-4b5f-a03f-4f63aed757a1] Running
	I1026 15:13:32.282886 1100384 system_pods.go:89] "storage-provisioner" [1f95e80f-9f93-44c4-b761-fd518de0c4d9] Running
	I1026 15:13:32.282897 1100384 system_pods.go:126] duration metric: took 891.5938ms to wait for k8s-apps to be running ...
	I1026 15:13:32.282914 1100384 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:13:32.282969 1100384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:13:32.296468 1100384 system_svc.go:56] duration metric: took 13.54263ms WaitForService to wait for kubelet
	I1026 15:13:32.296504 1100384 kubeadm.go:586] duration metric: took 12.759102603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:13:32.296526 1100384 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:13:32.299850 1100384 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 15:13:32.299878 1100384 node_conditions.go:123] node cpu capacity is 8
	I1026 15:13:32.299894 1100384 node_conditions.go:105] duration metric: took 3.363088ms to run NodePressure ...
	I1026 15:13:32.299907 1100384 start.go:241] waiting for startup goroutines ...
	I1026 15:13:32.299914 1100384 start.go:246] waiting for cluster config update ...
	I1026 15:13:32.299924 1100384 start.go:255] writing updated cluster config ...
	I1026 15:13:32.300234 1100384 ssh_runner.go:195] Run: rm -f paused
	I1026 15:13:32.304335 1100384 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:13:32.307473 1100384 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-shw6l" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.312359 1100384 pod_ready.go:94] pod "coredns-66bc5c9577-shw6l" is "Ready"
	I1026 15:13:32.312384 1100384 pod_ready.go:86] duration metric: took 4.8862ms for pod "coredns-66bc5c9577-shw6l" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.314462 1100384 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.318273 1100384 pod_ready.go:94] pod "etcd-default-k8s-diff-port-790012" is "Ready"
	I1026 15:13:32.318297 1100384 pod_ready.go:86] duration metric: took 3.808174ms for pod "etcd-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.320377 1100384 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.324152 1100384 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-790012" is "Ready"
	I1026 15:13:32.324183 1100384 pod_ready.go:86] duration metric: took 3.787572ms for pod "kube-apiserver-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.325956 1100384 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.708420 1100384 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-790012" is "Ready"
	I1026 15:13:32.708454 1100384 pod_ready.go:86] duration metric: took 382.476768ms for pod "kube-controller-manager-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.908230 1100384 pod_ready.go:83] waiting for pod "kube-proxy-wk2nn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:33.308766 1100384 pod_ready.go:94] pod "kube-proxy-wk2nn" is "Ready"
	I1026 15:13:33.308793 1100384 pod_ready.go:86] duration metric: took 400.537302ms for pod "kube-proxy-wk2nn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:33.509496 1100384 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:33.908459 1100384 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-790012" is "Ready"
	I1026 15:13:33.908489 1100384 pod_ready.go:86] duration metric: took 398.969559ms for pod "kube-scheduler-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:33.908501 1100384 pod_ready.go:40] duration metric: took 1.604136935s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:13:33.958143 1100384 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:13:33.963345 1100384 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-790012" cluster and "default" namespace by default
	I1026 15:13:31.698014 1107827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:32.198139 1107827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:32.698359 1107827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:33.197334 1107827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:33.697439 1107827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:34.198261 1107827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:34.697736 1107827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:34.769063 1107827 kubeadm.go:1113] duration metric: took 4.175520669s to wait for elevateKubeSystemPrivileges
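The repeated `kubectl get sa default` runs above are a fixed-interval poll: right after kubeadm brings the control plane up, minikube waits for the "default" ServiceAccount to exist before touching the cluster. A sketch of the same loop in shell:

    # Retry roughly every 500ms until the default ServiceAccount appears
    # (the ~4.2s total reported above is this loop converging).
    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done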
	I1026 15:13:34.769102 1107827 kubeadm.go:402] duration metric: took 16.397307608s to StartCluster
	I1026 15:13:34.769127 1107827 settings.go:142] acquiring lock: {Name:mkab79daecf1fab35293493e1e2484069a81f3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:34.769225 1107827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:13:34.770585 1107827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:34.770908 1107827 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:13:34.770943 1107827 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:13:34.770916 1107827 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:13:34.771042 1107827 addons.go:69] Setting default-storageclass=true in profile "auto-498531"
	I1026 15:13:34.771064 1107827 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-498531"
	I1026 15:13:34.771123 1107827 config.go:182] Loaded profile config "auto-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:34.771035 1107827 addons.go:69] Setting storage-provisioner=true in profile "auto-498531"
	I1026 15:13:34.771231 1107827 addons.go:238] Setting addon storage-provisioner=true in "auto-498531"
	I1026 15:13:34.771262 1107827 host.go:66] Checking if "auto-498531" exists ...
	I1026 15:13:34.771577 1107827 cli_runner.go:164] Run: docker container inspect auto-498531 --format={{.State.Status}}
	I1026 15:13:34.771772 1107827 cli_runner.go:164] Run: docker container inspect auto-498531 --format={{.State.Status}}
	I1026 15:13:34.776443 1107827 out.go:179] * Verifying Kubernetes components...
	I1026 15:13:34.777765 1107827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:34.799695 1107827 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:13:34.799746 1107827 addons.go:238] Setting addon default-storageclass=true in "auto-498531"
	I1026 15:13:34.799799 1107827 host.go:66] Checking if "auto-498531" exists ...
	I1026 15:13:34.800417 1107827 cli_runner.go:164] Run: docker container inspect auto-498531 --format={{.State.Status}}
	I1026 15:13:34.801236 1107827 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:34.801257 1107827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:13:34.801312 1107827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-498531
	I1026 15:13:34.829194 1107827 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:34.829225 1107827 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:13:34.829294 1107827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-498531
	I1026 15:13:34.832250 1107827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/auto-498531/id_rsa Username:docker}
	I1026 15:13:34.854598 1107827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/auto-498531/id_rsa Username:docker}
	I1026 15:13:34.869079 1107827 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 15:13:34.927602 1107827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:34.952284 1107827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:34.970067 1107827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:35.045061 1107827 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
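The sed pipeline a few lines up rewrites the coredns ConfigMap so the Corefile resolves host.minikube.internal to the network gateway. A hypothetical follow-up check (not part of this run; the context name is assumed to match the profile):

    kubectl --context auto-498531 -n kube-system get configmap coredns \
      -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
    # expected to show the injected block:
    #    hosts {
    #       192.168.94.1 host.minikube.internal
    #       fallthrough
    #    }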
	I1026 15:13:35.046332 1107827 node_ready.go:35] waiting up to 15m0s for node "auto-498531" to be "Ready" ...
	I1026 15:13:35.313398 1107827 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1026 15:13:31.389560 1113766 out.go:252] * Restarting existing docker container for "embed-certs-535130" ...
	I1026 15:13:31.389635 1113766 cli_runner.go:164] Run: docker start embed-certs-535130
	I1026 15:13:31.660384 1113766 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:13:31.678708 1113766 kic.go:430] container "embed-certs-535130" state is running.
	I1026 15:13:31.679126 1113766 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-535130
	I1026 15:13:31.697538 1113766 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/config.json ...
	I1026 15:13:31.697945 1113766 machine.go:93] provisionDockerMachine start ...
	I1026 15:13:31.698059 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:31.718923 1113766 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:31.719190 1113766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I1026 15:13:31.719210 1113766 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:13:31.720103 1113766 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48182->127.0.0.1:33862: read: connection reset by peer
	I1026 15:13:34.882840 1113766 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-535130
	
	I1026 15:13:34.882880 1113766 ubuntu.go:182] provisioning hostname "embed-certs-535130"
	I1026 15:13:34.882953 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:34.905938 1113766 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:34.906301 1113766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I1026 15:13:34.906322 1113766 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-535130 && echo "embed-certs-535130" | sudo tee /etc/hostname
	I1026 15:13:35.076003 1113766 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-535130
	
	I1026 15:13:35.076117 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:35.103101 1113766 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:35.103427 1113766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I1026 15:13:35.103450 1113766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-535130' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-535130/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-535130' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:13:35.256045 1113766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:13:35.256087 1113766 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-841519/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-841519/.minikube}
	I1026 15:13:35.256116 1113766 ubuntu.go:190] setting up certificates
	I1026 15:13:35.256132 1113766 provision.go:84] configureAuth start
	I1026 15:13:35.256217 1113766 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-535130
	I1026 15:13:35.279777 1113766 provision.go:143] copyHostCerts
	I1026 15:13:35.279863 1113766 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem, removing ...
	I1026 15:13:35.279881 1113766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem
	I1026 15:13:35.279958 1113766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem (1082 bytes)
	I1026 15:13:35.280106 1113766 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem, removing ...
	I1026 15:13:35.280124 1113766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem
	I1026 15:13:35.280197 1113766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem (1123 bytes)
	I1026 15:13:35.280306 1113766 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem, removing ...
	I1026 15:13:35.280314 1113766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem
	I1026 15:13:35.280352 1113766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem (1675 bytes)
	I1026 15:13:35.280449 1113766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem org=jenkins.embed-certs-535130 san=[127.0.0.1 192.168.76.2 embed-certs-535130 localhost minikube]
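provision.go generates that server certificate in Go; expressed with the openssl CLI, the step is roughly the following (an illustrative equivalent using the same org and SANs logged above, not minikube's actual implementation; file names and the validity period are placeholders):

    # CSR for the machine, then sign it with the minikube CA, embedding the SANs.
    openssl req -new -key server-key.pem -subj "/O=jenkins.embed-certs-535130" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:embed-certs-535130,DNS:localhost,DNS:minikube')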
	I1026 15:13:35.849277 1113766 provision.go:177] copyRemoteCerts
	I1026 15:13:35.849339 1113766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:13:35.849383 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:35.868289 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:35.970503 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1026 15:13:35.989799 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:13:36.009306 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:13:36.028885 1113766 provision.go:87] duration metric: took 772.732042ms to configureAuth
	I1026 15:13:36.028916 1113766 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:13:36.029146 1113766 config.go:182] Loaded profile config "embed-certs-535130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:36.029314 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:36.049872 1113766 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:36.050196 1113766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I1026 15:13:36.050225 1113766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:13:35.314522 1107827 addons.go:514] duration metric: took 543.574864ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 15:13:35.550021 1107827 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-498531" context rescaled to 1 replicas
	I1026 15:13:36.367622 1113766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:13:36.367650 1113766 machine.go:96] duration metric: took 4.669682302s to provisionDockerMachine
	I1026 15:13:36.367675 1113766 start.go:293] postStartSetup for "embed-certs-535130" (driver="docker")
	I1026 15:13:36.367689 1113766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:13:36.367750 1113766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:13:36.367797 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:36.388448 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:36.492995 1113766 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:13:36.496912 1113766 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:13:36.496990 1113766 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:13:36.497005 1113766 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/addons for local assets ...
	I1026 15:13:36.497441 1113766 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/files for local assets ...
	I1026 15:13:36.497581 1113766 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem -> 8450952.pem in /etc/ssl/certs
	I1026 15:13:36.497738 1113766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:13:36.506836 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:13:36.525298 1113766 start.go:296] duration metric: took 157.60468ms for postStartSetup
	I1026 15:13:36.525405 1113766 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:13:36.525460 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:36.544951 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:36.644413 1113766 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:13:36.649341 1113766 fix.go:56] duration metric: took 5.281758238s for fixHost
	I1026 15:13:36.649370 1113766 start.go:83] releasing machines lock for "embed-certs-535130", held for 5.281812223s
	I1026 15:13:36.649447 1113766 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-535130
	I1026 15:13:36.667811 1113766 ssh_runner.go:195] Run: cat /version.json
	I1026 15:13:36.667869 1113766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:13:36.667877 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:36.667930 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:36.687798 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:36.688085 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:36.842869 1113766 ssh_runner.go:195] Run: systemctl --version
	I1026 15:13:36.849931 1113766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:13:36.885592 1113766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:13:36.890858 1113766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:13:36.890935 1113766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:13:36.899349 1113766 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 15:13:36.899377 1113766 start.go:495] detecting cgroup driver to use...
	I1026 15:13:36.899413 1113766 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 15:13:36.899462 1113766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:13:36.915265 1113766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:13:36.928368 1113766 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:13:36.928419 1113766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:13:36.943985 1113766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:13:36.957590 1113766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:13:37.049991 1113766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:13:37.136299 1113766 docker.go:234] disabling docker service ...
	I1026 15:13:37.136360 1113766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:13:37.151928 1113766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:13:37.165026 1113766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:13:37.251238 1113766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:13:37.336342 1113766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
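Stopping, disabling, and masking in sequence is deliberate: each command blocks a different restart path. A commented restatement of the docker case above:

    sudo systemctl stop -f docker.socket    # stop the live socket unit now
    sudo systemctl stop -f docker.service   # stop the running daemon
    sudo systemctl disable docker.socket    # remove boot-time activation links
    sudo systemctl mask docker.service      # symlink the unit to /dev/null so nothing can start it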
	I1026 15:13:37.348920 1113766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:13:37.365703 1113766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:13:37.365769 1113766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:37.375313 1113766 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 15:13:37.375377 1113766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:37.385150 1113766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:37.394723 1113766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:37.404588 1113766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:13:37.413415 1113766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:37.423296 1113766 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:37.432517 1113766 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:37.441828 1113766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:13:37.449865 1113766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:13:37.457461 1113766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:37.547638 1113766 ssh_runner.go:195] Run: sudo systemctl restart crio
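Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf declaring the pause image, systemd cgroup management, and the unprivileged-port sysctl before crio is restarted. A hypothetical spot check (the expected values are the ones the commands set, not output captured from this node):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",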
	I1026 15:13:37.664727 1113766 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:13:37.664798 1113766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:13:37.668861 1113766 start.go:563] Will wait 60s for crictl version
	I1026 15:13:37.668919 1113766 ssh_runner.go:195] Run: which crictl
	I1026 15:13:37.672511 1113766 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:13:37.701474 1113766 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:13:37.701556 1113766 ssh_runner.go:195] Run: crio --version
	I1026 15:13:37.731543 1113766 ssh_runner.go:195] Run: crio --version
	I1026 15:13:37.765561 1113766 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:13:33.556906 1114752 out.go:252] * Restarting existing docker container for "newest-cni-450976" ...
	I1026 15:13:33.556988 1114752 cli_runner.go:164] Run: docker start newest-cni-450976
	I1026 15:13:33.822470 1114752 cli_runner.go:164] Run: docker container inspect newest-cni-450976 --format={{.State.Status}}
	I1026 15:13:33.842102 1114752 kic.go:430] container "newest-cni-450976" state is running.
	I1026 15:13:33.842808 1114752 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450976
	I1026 15:13:33.863064 1114752 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/config.json ...
	I1026 15:13:33.863323 1114752 machine.go:93] provisionDockerMachine start ...
	I1026 15:13:33.863396 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:33.884364 1114752 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:33.884687 1114752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33867 <nil> <nil>}
	I1026 15:13:33.884704 1114752 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:13:33.885475 1114752 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35906->127.0.0.1:33867: read: connection reset by peer
	I1026 15:13:37.031343 1114752 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-450976
	
	I1026 15:13:37.031380 1114752 ubuntu.go:182] provisioning hostname "newest-cni-450976"
	I1026 15:13:37.031446 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:37.051564 1114752 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:37.051811 1114752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33867 <nil> <nil>}
	I1026 15:13:37.051826 1114752 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-450976 && echo "newest-cni-450976" | sudo tee /etc/hostname
	I1026 15:13:37.213333 1114752 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-450976
	
	I1026 15:13:37.213420 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:37.231946 1114752 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:37.232310 1114752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33867 <nil> <nil>}
	I1026 15:13:37.232342 1114752 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-450976' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-450976/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-450976' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:13:37.380632 1114752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:13:37.380665 1114752 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-841519/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-841519/.minikube}
	I1026 15:13:37.380705 1114752 ubuntu.go:190] setting up certificates
	I1026 15:13:37.380727 1114752 provision.go:84] configureAuth start
	I1026 15:13:37.380796 1114752 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450976
	I1026 15:13:37.399725 1114752 provision.go:143] copyHostCerts
	I1026 15:13:37.399829 1114752 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem, removing ...
	I1026 15:13:37.399846 1114752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem
	I1026 15:13:37.399931 1114752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem (1675 bytes)
	I1026 15:13:37.400150 1114752 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem, removing ...
	I1026 15:13:37.400182 1114752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem
	I1026 15:13:37.400227 1114752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem (1082 bytes)
	I1026 15:13:37.400369 1114752 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem, removing ...
	I1026 15:13:37.400382 1114752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem
	I1026 15:13:37.400421 1114752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem (1123 bytes)
	I1026 15:13:37.400512 1114752 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem org=jenkins.newest-cni-450976 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-450976]
	I1026 15:13:37.763701 1114752 provision.go:177] copyRemoteCerts
	I1026 15:13:37.763767 1114752 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:13:37.763819 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:37.783049 1114752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33867 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:37.887525 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:13:37.906903 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 15:13:37.926587 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 15:13:37.944386 1114752 provision.go:87] duration metric: took 563.640766ms to configureAuth
	I1026 15:13:37.944414 1114752 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:13:37.944614 1114752 config.go:182] Loaded profile config "newest-cni-450976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:37.944731 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:37.964140 1114752 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:37.964409 1114752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33867 <nil> <nil>}
	I1026 15:13:37.964428 1114752 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:13:38.255873 1114752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:13:38.255901 1114752 machine.go:96] duration metric: took 4.392559982s to provisionDockerMachine
	I1026 15:13:38.255917 1114752 start.go:293] postStartSetup for "newest-cni-450976" (driver="docker")
	I1026 15:13:38.255931 1114752 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:13:38.256000 1114752 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:13:38.256055 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:38.275739 1114752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33867 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:37.766706 1113766 cli_runner.go:164] Run: docker network inspect embed-certs-535130 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:13:37.785593 1113766 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 15:13:37.789819 1113766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
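The /etc/hosts update above rebuilds the file and copies it back rather than editing with sed -i, which would replace the inode of a file Docker bind-mounts into the container. The same pattern, spelled out:

    { grep -v $'\thost.minikube.internal$' /etc/hosts    # keep everything but any old record
      printf '192.168.76.1\thost.minikube.internal\n'    # append the fresh record
    } > /tmp/h.$$                                        # stage under a unique temp name
    sudo cp /tmp/h.$$ /etc/hosts                         # cp writes in place, preserving the inode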
	I1026 15:13:37.800845 1113766 kubeadm.go:883] updating cluster {Name:embed-certs-535130 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:13:37.801020 1113766 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:13:37.801095 1113766 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:13:37.834876 1113766 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:13:37.834902 1113766 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:13:37.834962 1113766 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:13:37.861255 1113766 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:13:37.861279 1113766 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:13:37.861322 1113766 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1026 15:13:37.861435 1113766 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-535130 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:13:37.861503 1113766 ssh_runner.go:195] Run: crio config
	I1026 15:13:37.912692 1113766 cni.go:84] Creating CNI manager for ""
	I1026 15:13:37.912714 1113766 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:13:37.912747 1113766 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:13:37.912784 1113766 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-535130 NodeName:embed-certs-535130 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:13:37.912927 1113766 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-535130"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 15:13:37.913000 1113766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:13:37.921351 1113766 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:13:37.921430 1113766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:13:37.929571 1113766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1026 15:13:37.942588 1113766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:13:37.955865 1113766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
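The kubeadm config dumped above has just been staged as /var/tmp/minikube/kubeadm.yaml.new. Recent kubeadm releases can sanity-check such a file before it is used, a hypothetical extra step this run does not perform:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new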
	I1026 15:13:37.970133 1113766 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:13:37.974032 1113766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:13:37.985196 1113766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:38.073069 1113766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:38.095947 1113766 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130 for IP: 192.168.76.2
	I1026 15:13:38.095969 1113766 certs.go:195] generating shared ca certs ...
	I1026 15:13:38.095990 1113766 certs.go:227] acquiring lock for ca certs: {Name:mkc310765b5f037cf348f6c57ba521193a825757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:38.096157 1113766 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key
	I1026 15:13:38.096247 1113766 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key
	I1026 15:13:38.096263 1113766 certs.go:257] generating profile certs ...
	I1026 15:13:38.096402 1113766 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/client.key
	I1026 15:13:38.096505 1113766 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key.abe399f3
	I1026 15:13:38.096557 1113766 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.key
	I1026 15:13:38.096790 1113766 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem (1338 bytes)
	W1026 15:13:38.096865 1113766 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095_empty.pem, impossibly tiny 0 bytes
	I1026 15:13:38.096882 1113766 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:13:38.096913 1113766 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:13:38.096948 1113766 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:13:38.096970 1113766 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem (1675 bytes)
	I1026 15:13:38.097027 1113766 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:13:38.097985 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:13:38.117316 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:13:38.141746 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:13:38.162963 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:13:38.188391 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1026 15:13:38.209813 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 15:13:38.228846 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:13:38.247538 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:13:38.267934 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem --> /usr/share/ca-certificates/845095.pem (1338 bytes)
	I1026 15:13:38.287253 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /usr/share/ca-certificates/8450952.pem (1708 bytes)
	I1026 15:13:38.306737 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:13:38.325248 1113766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:13:38.338026 1113766 ssh_runner.go:195] Run: openssl version
	I1026 15:13:38.344312 1113766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845095.pem && ln -fs /usr/share/ca-certificates/845095.pem /etc/ssl/certs/845095.pem"
	I1026 15:13:38.353974 1113766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845095.pem
	I1026 15:13:38.358501 1113766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:26 /usr/share/ca-certificates/845095.pem
	I1026 15:13:38.358573 1113766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845095.pem
	I1026 15:13:38.395847 1113766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/845095.pem /etc/ssl/certs/51391683.0"
	I1026 15:13:38.404522 1113766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8450952.pem && ln -fs /usr/share/ca-certificates/8450952.pem /etc/ssl/certs/8450952.pem"
	I1026 15:13:38.414054 1113766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8450952.pem
	I1026 15:13:38.418460 1113766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:26 /usr/share/ca-certificates/8450952.pem
	I1026 15:13:38.418516 1113766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8450952.pem
	I1026 15:13:38.454059 1113766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8450952.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:13:38.462770 1113766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:13:38.471399 1113766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:38.475250 1113766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:14 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:38.475300 1113766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:38.510924 1113766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
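The <hash>.0 symlink names above follow OpenSSL's CA directory convention: clients look CA certificates up by subject-name hash. The pairing can be reproduced by hand, e.g. for the minikube CA:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"    # b5213941, matching the link created above
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"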
	I1026 15:13:38.519486 1113766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:13:38.523384 1113766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 15:13:38.561625 1113766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 15:13:38.601091 1113766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 15:13:38.651816 1113766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 15:13:38.697241 1113766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 15:13:38.754098 1113766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
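Each -checkend 86400 probe above asks whether a certificate will still be valid 24 hours (86400 seconds) from now; openssl exits 0 if so and 1 otherwise. In standalone form:

    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo "valid for at least another 24h"
    else
      echo "expires within 24h"
    fi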
	I1026 15:13:38.813908 1113766 kubeadm.go:400] StartCluster: {Name:embed-certs-535130 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:13:38.814039 1113766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:13:38.814105 1113766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:13:38.850213 1113766 cri.go:89] found id: "79f294b1af5377dbbe09bff36c0ce752c337fff26f468f52ba372eeae7c2fbd7"
	I1026 15:13:38.850237 1113766 cri.go:89] found id: "0cf664b8ea8fd4397a4e4d0903d086cb617b472ad1631050bc542a9e5c06ca09"
	I1026 15:13:38.850243 1113766 cri.go:89] found id: "43565d9e1913984f12b45a1203fca769c7b760ccf18830408972ff108c39b9bf"
	I1026 15:13:38.850248 1113766 cri.go:89] found id: "7f30d07b339ab7331f72cd45f5f34ee9c7eb82bec1197a77db9c34d2fcb6c24b"
	I1026 15:13:38.850252 1113766 cri.go:89] found id: ""
	I1026 15:13:38.850291 1113766 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 15:13:38.865515 1113766 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:13:38Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:13:38.865607 1113766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:13:38.878426 1113766 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 15:13:38.878485 1113766 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 15:13:38.878632 1113766 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 15:13:38.890199 1113766 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 15:13:38.891157 1113766 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-535130" does not appear in /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:13:38.891705 1113766 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-841519/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-535130" cluster setting kubeconfig missing "embed-certs-535130" context setting]
	I1026 15:13:38.892512 1113766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
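
The kubeconfig lines above show the repair path: the profile's cluster and context entries are missing, so the file is rewritten under a lock. A sketch of the same repair using client-go's clientcmd package (an assumption; minikube has its own kubeconfig helpers), with the profile name and server taken from the log:

    package main

    import (
    	"fmt"
    	"os"

    	"k8s.io/client-go/tools/clientcmd"
    	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig fills in a missing cluster and context entry and
    // points the current context at them.
    func repairKubeconfig(path, name, server string) error {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		return err
    	}
    	if _, ok := cfg.Clusters[name]; !ok {
    		c := clientcmdapi.NewCluster()
    		c.Server = server
    		cfg.Clusters[name] = c
    	}
    	if _, ok := cfg.Contexts[name]; !ok {
    		ctx := clientcmdapi.NewContext()
    		ctx.Cluster, ctx.AuthInfo = name, name
    		cfg.Contexts[name] = ctx
    	}
    	cfg.CurrentContext = name
    	return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
    	err := repairKubeconfig(os.Getenv("KUBECONFIG"),
    		"embed-certs-535130", "https://192.168.76.2:8443")
    	fmt.Println(err)
    }
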
	I1026 15:13:38.894605 1113766 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 15:13:38.904931 1113766 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1026 15:13:38.904968 1113766 kubeadm.go:601] duration metric: took 26.475851ms to restartPrimaryControlPlane
	I1026 15:13:38.904979 1113766 kubeadm.go:402] duration metric: took 91.083527ms to StartCluster
	I1026 15:13:38.904999 1113766 settings.go:142] acquiring lock: {Name:mkab79daecf1fab35293493e1e2484069a81f3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:38.905074 1113766 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:13:38.907087 1113766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:38.907395 1113766 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:13:38.907661 1113766 config.go:182] Loaded profile config "embed-certs-535130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:38.907720 1113766 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:13:38.907797 1113766 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-535130"
	I1026 15:13:38.907828 1113766 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-535130"
	W1026 15:13:38.907836 1113766 addons.go:247] addon storage-provisioner should already be in state true
	I1026 15:13:38.907864 1113766 host.go:66] Checking if "embed-certs-535130" exists ...
	I1026 15:13:38.908130 1113766 addons.go:69] Setting dashboard=true in profile "embed-certs-535130"
	I1026 15:13:38.908184 1113766 addons.go:238] Setting addon dashboard=true in "embed-certs-535130"
	W1026 15:13:38.908193 1113766 addons.go:247] addon dashboard should already be in state true
	I1026 15:13:38.908220 1113766 host.go:66] Checking if "embed-certs-535130" exists ...
	I1026 15:13:38.908373 1113766 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:13:38.908397 1113766 addons.go:69] Setting default-storageclass=true in profile "embed-certs-535130"
	I1026 15:13:38.908423 1113766 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-535130"
	I1026 15:13:38.908708 1113766 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:13:38.908740 1113766 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:13:38.910199 1113766 out.go:179] * Verifying Kubernetes components...
	I1026 15:13:38.912350 1113766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:38.938355 1113766 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 15:13:38.940173 1113766 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:13:38.940965 1113766 addons.go:238] Setting addon default-storageclass=true in "embed-certs-535130"
	W1026 15:13:38.940987 1113766 addons.go:247] addon default-storageclass should already be in state true
	I1026 15:13:38.941016 1113766 host.go:66] Checking if "embed-certs-535130" exists ...
	I1026 15:13:38.941438 1113766 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 15:13:38.376923 1114752 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:13:38.380788 1114752 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:13:38.380831 1114752 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:13:38.380846 1114752 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/addons for local assets ...
	I1026 15:13:38.380907 1114752 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/files for local assets ...
	I1026 15:13:38.381022 1114752 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem -> 8450952.pem in /etc/ssl/certs
	I1026 15:13:38.381143 1114752 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:13:38.389543 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /etc/ssl/certs/8450952.pem (1708 bytes)
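
The filesync scan above mirrors everything under .minikube/files into the machine at the same relative path, which is how 8450952.pem lands in /etc/ssl/certs. A hypothetical sketch of that mapping:

    package main

    import (
    	"fmt"
    	"io/fs"
    	"path/filepath"
    )

    // scanAssets maps each file under root to the absolute path it would
    // take on the machine (files/etc/ssl/certs/x.pem -> /etc/ssl/certs/x.pem).
    func scanAssets(root string) (map[string]string, error) {
    	assets := map[string]string{}
    	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
    		if err != nil || d.IsDir() {
    			return err
    		}
    		rel, rerr := filepath.Rel(root, p)
    		if rerr != nil {
    			return rerr
    		}
    		assets[p] = "/" + filepath.ToSlash(rel)
    		return nil
    	})
    	return assets, err
    }

    func main() {
    	m, err := scanAssets(".minikube/files")
    	fmt.Println(m, err)
    }
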
	I1026 15:13:38.409203 1114752 start.go:296] duration metric: took 153.266796ms for postStartSetup
	I1026 15:13:38.409313 1114752 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:13:38.409379 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:38.429864 1114752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33867 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:38.528464 1114752 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:13:38.533391 1114752 fix.go:56] duration metric: took 4.998404293s for fixHost
	I1026 15:13:38.533463 1114752 start.go:83] releasing machines lock for "newest-cni-450976", held for 4.998465392s
	I1026 15:13:38.533543 1114752 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450976
	I1026 15:13:38.553571 1114752 ssh_runner.go:195] Run: cat /version.json
	I1026 15:13:38.553643 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:38.553654 1114752 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:13:38.553767 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:38.574284 1114752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33867 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:38.574504 1114752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33867 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:38.676051 1114752 ssh_runner.go:195] Run: systemctl --version
	I1026 15:13:38.758754 1114752 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:13:38.813141 1114752 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:13:38.819297 1114752 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:13:38.819357 1114752 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:13:38.830001 1114752 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 15:13:38.830033 1114752 start.go:495] detecting cgroup driver to use...
	I1026 15:13:38.830069 1114752 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 15:13:38.830116 1114752 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:13:38.850256 1114752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:13:38.868194 1114752 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:13:38.868253 1114752 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:13:38.891101 1114752 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:13:38.910824 1114752 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:13:39.062926 1114752 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:13:39.179139 1114752 docker.go:234] disabling docker service ...
	I1026 15:13:39.179229 1114752 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:13:39.201712 1114752 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:13:39.223985 1114752 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:13:39.330872 1114752 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:13:39.440045 1114752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:13:39.456995 1114752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:13:39.475173 1114752 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:13:39.475235 1114752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:39.485838 1114752 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 15:13:39.485911 1114752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:39.497890 1114752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:39.509311 1114752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:39.521401 1114752 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:13:39.531708 1114752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:39.545553 1114752 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:39.558867 1114752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
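
The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the systemd cgroup manager, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls. A Go sketch of part of that string surgery (illustrative only; minikube shells out to sed as logged):

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    // patchCrioConf mirrors three of the edits: pin the pause image,
    // force the systemd cgroup manager, and append a default_sysctls
    // block when none exists.
    func patchCrioConf(conf string) string {
    	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "systemd"`)
    	if !strings.Contains(conf, "default_sysctls") {
    		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
    	}
    	return conf
    }

    func main() {
    	in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"
    	fmt.Print(patchCrioConf(in))
    }
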
	I1026 15:13:39.572132 1114752 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:13:39.582550 1114752 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:13:39.592870 1114752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:39.728450 1114752 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:13:39.862260 1114752 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:13:39.862332 1114752 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:13:39.867333 1114752 start.go:563] Will wait 60s for crictl version
	I1026 15:13:39.867406 1114752 ssh_runner.go:195] Run: which crictl
	I1026 15:13:39.872243 1114752 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:13:39.903804 1114752 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
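
The two 60-second waits above (first for the socket path, then for crictl version) amount to a stat-then-exec loop. A minimal sketch, with the socket path from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    // waitForSocket polls until the CRI socket exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(250 * time.Millisecond)
    	}
    	return fmt.Errorf("socket %s never appeared", path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	out, _ := exec.Command("sudo", "crictl", "version").CombinedOutput()
    	fmt.Print(string(out))
    }
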
	I1026 15:13:39.903885 1114752 ssh_runner.go:195] Run: crio --version
	I1026 15:13:39.940255 1114752 ssh_runner.go:195] Run: crio --version
	I1026 15:13:39.980141 1114752 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:13:39.981402 1114752 cli_runner.go:164] Run: docker network inspect newest-cni-450976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:13:40.009905 1114752 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1026 15:13:40.014650 1114752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
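
The bash one-liner above upserts the host.minikube.internal mapping: drop any existing line ending in the name, then append the fresh IP. A native Go equivalent (a sketch, not minikube's implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost removes any hosts line that already ends in "\t<name>"
    // and appends the new mapping, matching the grep -v / echo pipeline.
    func upsertHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var keep []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			keep = append(keep, line)
    		}
    	}
    	keep = append(keep, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(keep, "\n")+"\n"), 0o644)
    }

    func main() {
    	fmt.Println(upsertHost("/etc/hosts", "192.168.103.1", "host.minikube.internal"))
    }
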
	I1026 15:13:40.027719 1114752 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1026 15:13:38.941506 1113766 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:38.941523 1113766 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:13:38.941613 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:38.941532 1113766 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:13:38.942632 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 15:13:38.942653 1113766 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 15:13:38.942702 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:38.976439 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:38.978129 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:38.980802 1113766 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:38.980863 1113766 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:13:38.981009 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:39.014150 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:39.104677 1113766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:39.122534 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:13:39.122561 1113766 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:13:39.123493 1113766 node_ready.go:35] waiting up to 6m0s for node "embed-certs-535130" to be "Ready" ...
	I1026 15:13:39.128461 1113766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:39.137116 1113766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:39.143559 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:13:39.143586 1113766 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:13:39.164258 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:13:39.164287 1113766 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:13:39.185403 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:13:39.185487 1113766 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:13:39.204860 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:13:39.204885 1113766 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:13:39.231456 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:13:39.231485 1113766 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:13:39.247596 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:13:39.247622 1113766 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:13:39.268795 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:13:39.268827 1113766 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:13:39.285975 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:13:39.286003 1113766 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:13:39.300204 1113766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:13:40.480787 1113766 node_ready.go:49] node "embed-certs-535130" is "Ready"
	I1026 15:13:40.480821 1113766 node_ready.go:38] duration metric: took 1.357286103s for node "embed-certs-535130" to be "Ready" ...
	I1026 15:13:40.480838 1113766 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:13:40.480891 1113766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:13:41.063718 1113766 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.926561303s)
	I1026 15:13:41.064077 1113766 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.763825551s)
	I1026 15:13:41.064352 1113766 api_server.go:72] duration metric: took 2.156917709s to wait for apiserver process to appear ...
	I1026 15:13:41.064364 1113766 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:13:41.064384 1113766 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:13:41.066580 1113766 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.938084789s)
	I1026 15:13:41.068720 1113766 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-535130 addons enable metrics-server
	
	I1026 15:13:41.072450 1113766 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:13:41.072475 1113766 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:13:41.079563 1113766 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1026 15:13:41.081450 1113766 addons.go:514] duration metric: took 2.1737229s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
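
The 500s above are expected during a restart: /healthz aggregates the apiserver's post-start hooks, and rbac/bootstrap-roles plus the priority-class hook report failed until bootstrapping settles, so the health wait simply retries. A minimal poll loop under those assumptions (InsecureSkipVerify stands in for the real CA handling):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthz polls /healthz until it returns 200 or the deadline
    // passes; a 500 just means post-start hooks are still settling.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	fmt.Println(waitHealthz("https://192.168.76.2:8443/healthz", 60*time.Second))
    }
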
	I1026 15:13:40.028927 1114752 kubeadm.go:883] updating cluster {Name:newest-cni-450976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-450976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:13:40.029111 1114752 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:13:40.029202 1114752 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:13:40.066752 1114752 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:13:40.066779 1114752 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:13:40.066837 1114752 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:13:40.095689 1114752 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:13:40.095711 1114752 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:13:40.095719 1114752 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1026 15:13:40.095834 1114752 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-450976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-450976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:13:40.095896 1114752 ssh_runner.go:195] Run: crio config
	I1026 15:13:40.174353 1114752 cni.go:84] Creating CNI manager for ""
	I1026 15:13:40.174388 1114752 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:13:40.174417 1114752 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1026 15:13:40.174447 1114752 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-450976 NodeName:newest-cni-450976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:13:40.174628 1114752 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-450976"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 15:13:40.174714 1114752 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:13:40.185063 1114752 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:13:40.185142 1114752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:13:40.193834 1114752 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1026 15:13:40.207803 1114752 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:13:40.221135 1114752 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
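
The 2214-byte kubeadm.yaml written above carries the generated documents printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick round-trip check of the proxy document, assuming gopkg.in/yaml.v3 (any YAML library would do):

    package main

    import (
    	"fmt"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	// The KubeProxyConfiguration document from the generated config.
    	doc := `apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    clusterCIDR: "10.42.0.0/16"
    metricsBindAddress: 0.0.0.0:10249
    conntrack:
      maxPerCore: 0
    `
    	var cfg struct {
    		Kind        string `yaml:"kind"`
    		ClusterCIDR string `yaml:"clusterCIDR"`
    	}
    	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
    		panic(err)
    	}
    	fmt.Println(cfg.Kind, cfg.ClusterCIDR) // KubeProxyConfiguration 10.42.0.0/16
    }
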
	I1026 15:13:40.235918 1114752 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:13:40.239959 1114752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:13:40.256497 1114752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:40.359653 1114752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:40.395140 1114752 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976 for IP: 192.168.103.2
	I1026 15:13:40.395205 1114752 certs.go:195] generating shared ca certs ...
	I1026 15:13:40.395229 1114752 certs.go:227] acquiring lock for ca certs: {Name:mkc310765b5f037cf348f6c57ba521193a825757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:40.395390 1114752 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key
	I1026 15:13:40.395438 1114752 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key
	I1026 15:13:40.395452 1114752 certs.go:257] generating profile certs ...
	I1026 15:13:40.395587 1114752 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/client.key
	I1026 15:13:40.395677 1114752 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/apiserver.key.6904aab9
	I1026 15:13:40.395726 1114752 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/proxy-client.key
	I1026 15:13:40.395894 1114752 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem (1338 bytes)
	W1026 15:13:40.395936 1114752 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095_empty.pem, impossibly tiny 0 bytes
	I1026 15:13:40.395950 1114752 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:13:40.395985 1114752 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:13:40.396018 1114752 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:13:40.396050 1114752 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem (1675 bytes)
	I1026 15:13:40.396105 1114752 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:13:40.396848 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:13:40.428740 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:13:40.467100 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:13:40.505682 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:13:40.537741 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 15:13:40.570121 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 15:13:40.595584 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:13:40.623177 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:13:40.644134 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem --> /usr/share/ca-certificates/845095.pem (1338 bytes)
	I1026 15:13:40.667283 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /usr/share/ca-certificates/8450952.pem (1708 bytes)
	I1026 15:13:40.688417 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:13:40.708044 1114752 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:13:40.721964 1114752 ssh_runner.go:195] Run: openssl version
	I1026 15:13:40.729493 1114752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:13:40.740489 1114752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:40.745099 1114752 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:14 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:40.745235 1114752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:40.783506 1114752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:13:40.793310 1114752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845095.pem && ln -fs /usr/share/ca-certificates/845095.pem /etc/ssl/certs/845095.pem"
	I1026 15:13:40.803928 1114752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845095.pem
	I1026 15:13:40.808231 1114752 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:26 /usr/share/ca-certificates/845095.pem
	I1026 15:13:40.808294 1114752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845095.pem
	I1026 15:13:40.855542 1114752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/845095.pem /etc/ssl/certs/51391683.0"
	I1026 15:13:40.865852 1114752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8450952.pem && ln -fs /usr/share/ca-certificates/8450952.pem /etc/ssl/certs/8450952.pem"
	I1026 15:13:40.876943 1114752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8450952.pem
	I1026 15:13:40.881960 1114752 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:26 /usr/share/ca-certificates/8450952.pem
	I1026 15:13:40.882035 1114752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8450952.pem
	I1026 15:13:40.930751 1114752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8450952.pem /etc/ssl/certs/3ec20f2e.0"
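
The openssl/ln pairs above install each PEM into the system trust store: compute the subject hash, then link /etc/ssl/certs/<hash>.0 at the file, which is why 8450952.pem gets the 3ec20f2e.0 link. A sketch of one hash-and-link step:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCert asks openssl for the certificate's subject hash and links
    // /etc/ssl/certs/<hash>.0 at the PEM, as the shell pair above does.
    func linkCert(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := "/etc/ssl/certs/" + hash + ".0"
    	os.Remove(link) // refresh a stale link, if any
    	return os.Symlink(pem, link)
    }

    func main() {
    	fmt.Println(linkCert("/usr/share/ca-certificates/8450952.pem"))
    }
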
	I1026 15:13:40.941036 1114752 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:13:40.946656 1114752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 15:13:40.990223 1114752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 15:13:41.044589 1114752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 15:13:41.095556 1114752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 15:13:41.150431 1114752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 15:13:41.203945 1114752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
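
Each "-checkend 86400" call above asks whether a certificate expires within 24 hours (86400 seconds). The same test in native Go, assuming PEM-encoded certs:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresSoon reports whether the first certificate in the PEM file
    // expires within the given window (86400s = 24h in the log).
    func expiresSoon(path string, within time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(within).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }
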
	I1026 15:13:41.260135 1114752 kubeadm.go:400] StartCluster: {Name:newest-cni-450976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-450976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:13:41.260282 1114752 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:13:41.260381 1114752 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:13:41.312408 1114752 cri.go:89] found id: "d301b19a9754fef9062ff0ab32cef39843a3b341f9c9c9c979ce50772e060f34"
	I1026 15:13:41.312495 1114752 cri.go:89] found id: "eca31c4960e5fee40ff7a27e80d78ba23e050229040a9c119c1a39d6d964c134"
	I1026 15:13:41.312506 1114752 cri.go:89] found id: "7b4821416cdb1f5a1c75031b5a1a9853efa078e8f2964c61061e443a8fe518d0"
	I1026 15:13:41.312512 1114752 cri.go:89] found id: "dad7b5a044afb9affbe248c4fce4bf89b73634fb0298fd50fe83199eecb4779f"
	I1026 15:13:41.312516 1114752 cri.go:89] found id: ""
	I1026 15:13:41.312586 1114752 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 15:13:41.328414 1114752 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:13:41Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:13:41.328490 1114752 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:13:41.339143 1114752 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 15:13:41.339202 1114752 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 15:13:41.339274 1114752 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 15:13:41.349811 1114752 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 15:13:41.351328 1114752 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-450976" does not appear in /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:13:41.352331 1114752 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-841519/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-450976" cluster setting kubeconfig missing "newest-cni-450976" context setting]
	I1026 15:13:41.353686 1114752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:41.356482 1114752 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 15:13:41.368106 1114752 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1026 15:13:41.368228 1114752 kubeadm.go:601] duration metric: took 29.01603ms to restartPrimaryControlPlane
	I1026 15:13:41.368248 1114752 kubeadm.go:402] duration metric: took 108.140463ms to StartCluster
	I1026 15:13:41.368309 1114752 settings.go:142] acquiring lock: {Name:mkab79daecf1fab35293493e1e2484069a81f3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:41.368403 1114752 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:13:41.371525 1114752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:41.371844 1114752 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:13:41.371893 1114752 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:13:41.371998 1114752 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-450976"
	I1026 15:13:41.372027 1114752 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-450976"
	W1026 15:13:41.372049 1114752 addons.go:247] addon storage-provisioner should already be in state true
	I1026 15:13:41.372062 1114752 addons.go:69] Setting dashboard=true in profile "newest-cni-450976"
	I1026 15:13:41.372077 1114752 addons.go:238] Setting addon dashboard=true in "newest-cni-450976"
	I1026 15:13:41.372081 1114752 host.go:66] Checking if "newest-cni-450976" exists ...
	W1026 15:13:41.372084 1114752 addons.go:247] addon dashboard should already be in state true
	I1026 15:13:41.372094 1114752 config.go:182] Loaded profile config "newest-cni-450976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:41.372107 1114752 host.go:66] Checking if "newest-cni-450976" exists ...
	I1026 15:13:41.372146 1114752 addons.go:69] Setting default-storageclass=true in profile "newest-cni-450976"
	I1026 15:13:41.372184 1114752 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-450976"
	I1026 15:13:41.372469 1114752 cli_runner.go:164] Run: docker container inspect newest-cni-450976 --format={{.State.Status}}
	I1026 15:13:41.372627 1114752 cli_runner.go:164] Run: docker container inspect newest-cni-450976 --format={{.State.Status}}
	I1026 15:13:41.372627 1114752 cli_runner.go:164] Run: docker container inspect newest-cni-450976 --format={{.State.Status}}
	I1026 15:13:41.375710 1114752 out.go:179] * Verifying Kubernetes components...
	I1026 15:13:41.377073 1114752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:41.403083 1114752 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 15:13:41.403092 1114752 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:13:41.404381 1114752 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:41.404403 1114752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:13:41.404443 1114752 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1026 15:13:37.050036 1107827 node_ready.go:57] node "auto-498531" has "Ready":"False" status (will retry)
	W1026 15:13:39.051344 1107827 node_ready.go:57] node "auto-498531" has "Ready":"False" status (will retry)
	W1026 15:13:41.550747 1107827 node_ready.go:57] node "auto-498531" has "Ready":"False" status (will retry)
	I1026 15:13:41.404459 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:41.405303 1114752 addons.go:238] Setting addon default-storageclass=true in "newest-cni-450976"
	W1026 15:13:41.405323 1114752 addons.go:247] addon default-storageclass should already be in state true
	I1026 15:13:41.405352 1114752 host.go:66] Checking if "newest-cni-450976" exists ...
	I1026 15:13:41.405598 1114752 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 15:13:41.405622 1114752 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 15:13:41.405701 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:41.405848 1114752 cli_runner.go:164] Run: docker container inspect newest-cni-450976 --format={{.State.Status}}
	I1026 15:13:41.434754 1114752 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:41.434778 1114752 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:13:41.435001 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:41.440367 1114752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33867 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:41.440381 1114752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33867 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:41.468417 1114752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33867 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:41.570403 1114752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:41.626524 1114752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:41.637974 1114752 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:13:41.638018 1114752 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:13:41.638354 1114752 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:13:41.638413 1114752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:13:41.649500 1114752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:41.701715 1114752 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:13:41.701748 1114752 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:13:41.758574 1114752 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:13:41.758600 1114752 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:13:41.789831 1114752 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:13:41.789857 1114752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:13:41.820110 1114752 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:13:41.820295 1114752 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:13:41.845585 1114752 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:13:41.846232 1114752 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:13:41.871736 1114752 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:13:41.871894 1114752 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:13:41.895283 1114752 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:13:41.895922 1114752 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:13:41.920762 1114752 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:13:41.920789 1114752 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:13:41.947088 1114752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:13:43.537431 1114752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.910821538s)
	I1026 15:13:43.537498 1114752 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.899063191s)
	I1026 15:13:43.537512 1114752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.887990023s)
	I1026 15:13:43.537531 1114752 api_server.go:72] duration metric: took 2.165650284s to wait for apiserver process to appear ...
	I1026 15:13:43.537541 1114752 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:13:43.537564 1114752 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1026 15:13:43.537678 1114752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.590549527s)
	I1026 15:13:43.539310 1114752 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-450976 addons enable metrics-server
	
	I1026 15:13:43.546753 1114752 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:13:43.546780 1114752 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:13:43.553499 1114752 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1026 15:13:43.554571 1114752 addons.go:514] duration metric: took 2.18268422s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
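
[Editor's note] The addon flow above is minikube's standard pattern: each manifest is copied to /etc/kubernetes/addons on the node over SSH, then the whole set is applied in a single kubectl invocation against the node-local kubeconfig. A minimal host-side sketch of that final step, in Go, reusing the same paths the log shows (this is an illustration of the command shape, not minikube's actual ssh_runner code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Mirrors the command shape from the log: "sudo VAR=value cmd" is
    	// valid because sudo accepts inline environment assignments.
    	args := []string{
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply",
    	}
    	for _, m := range []string{
    		"/etc/kubernetes/addons/dashboard-ns.yaml",
    		"/etc/kubernetes/addons/dashboard-svc.yaml",
    	} {
    		args = append(args, "-f", m)
    	}
    	out, err := exec.Command("sudo", args...).CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("apply failed:", err)
    	}
    }
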
	I1026 15:13:44.038396 1114752 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1026 15:13:44.045081 1114752 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:13:44.045119 1114752 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:13:44.537650 1114752 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1026 15:13:44.541643 1114752 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1026 15:13:44.542688 1114752 api_server.go:141] control plane version: v1.34.1
	I1026 15:13:44.542717 1114752 api_server.go:131] duration metric: took 1.005167152s to wait for apiserver health ...
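
[Editor's note] The 500 responses above are /healthz enumerating individual poststarthook checks that have not completed yet (here rbac/bootstrap-roles and the priority-class bootstrap); minikube simply re-polls on a roughly 500ms cadence until the endpoint returns 200 with body "ok". A minimal sketch of that loop, assuming a reachable apiserver; TLS verification is skipped here purely for illustration, whereas the real client trusts the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for {
    		resp, err := client.Get("https://192.168.103.2:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz: %s\n", body) // "ok"
    				return
    			}
    			fmt.Printf("healthz returned %d; retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the log
    	}
    }
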
	I1026 15:13:44.542729 1114752 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:13:44.546030 1114752 system_pods.go:59] 8 kube-system pods found
	I1026 15:13:44.546057 1114752 system_pods.go:61] "coredns-66bc5c9577-7jwrr" [c1acc555-e2da-4acf-ac6d-6818ea2173d5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 15:13:44.546064 1114752 system_pods.go:61] "etcd-newest-cni-450976" [5ee64166-247f-49ca-9212-b4c60c0152c1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:13:44.546072 1114752 system_pods.go:61] "kindnet-9tqxv" [d6ade61f-e6fb-4746-9b65-ce10129cd53e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1026 15:13:44.546077 1114752 system_pods.go:61] "kube-apiserver-newest-cni-450976" [a2aa9446-3bbe-45c4-902b-07e7773290bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:13:44.546083 1114752 system_pods.go:61] "kube-controller-manager-newest-cni-450976" [0ae3b699-6a5a-41d6-b223-9f6858f990cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:13:44.546096 1114752 system_pods.go:61] "kube-proxy-jfm7b" [6e6c6e48-eb1f-4a31-9cf4-390096851e53] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:13:44.546102 1114752 system_pods.go:61] "kube-scheduler-newest-cni-450976" [8a2965f8-8545-46fd-bcf3-cc767c87b873] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:13:44.546107 1114752 system_pods.go:61] "storage-provisioner" [7182c30a-3cfc-49ba-b2d8-ee172f0272dd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 15:13:44.546115 1114752 system_pods.go:74] duration metric: took 3.379927ms to wait for pod list to return data ...
	I1026 15:13:44.546126 1114752 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:13:44.548446 1114752 default_sa.go:45] found service account: "default"
	I1026 15:13:44.548466 1114752 default_sa.go:55] duration metric: took 2.333903ms for default service account to be created ...
	I1026 15:13:44.548476 1114752 kubeadm.go:586] duration metric: took 3.176596107s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 15:13:44.548495 1114752 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:13:44.550942 1114752 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 15:13:44.550972 1114752 node_conditions.go:123] node cpu capacity is 8
	I1026 15:13:44.550988 1114752 node_conditions.go:105] duration metric: took 2.487701ms to run NodePressure ...
	I1026 15:13:44.551004 1114752 start.go:241] waiting for startup goroutines ...
	I1026 15:13:44.551016 1114752 start.go:246] waiting for cluster config update ...
	I1026 15:13:44.551030 1114752 start.go:255] writing updated cluster config ...
	I1026 15:13:44.551393 1114752 ssh_runner.go:195] Run: rm -f paused
	I1026 15:13:44.601145 1114752 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:13:44.603093 1114752 out.go:179] * Done! kubectl is now configured to use "newest-cni-450976" cluster and "default" namespace by default
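
[Editor's note] The closing line compares the host kubectl version against the cluster's control-plane version and reports the minor-version skew; kubectl officially supports a skew of one minor version in either direction, so "minor skew: 0" means no warning is printed. A small sketch of that comparison, assuming well-formed "major.minor.patch" inputs:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference between the minor versions
    // of two "major.minor.patch" strings, e.g. "1.34.1" vs "1.34.1" -> 0.
    func minorSkew(client, cluster string) int {
    	minor := func(v string) int {
    		m, _ := strconv.Atoi(strings.Split(v, ".")[1])
    		return m
    	}
    	d := minor(client) - minor(cluster)
    	if d < 0 {
    		d = -d
    	}
    	return d
    }

    func main() {
    	fmt.Println(minorSkew("1.34.1", "1.34.1")) // 0, as reported in the log
    }
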
	I1026 15:13:41.565077 1113766 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:13:41.583478 1113766 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:13:41.583523 1113766 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:13:42.065228 1113766 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:13:42.070792 1113766 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 15:13:42.072076 1113766 api_server.go:141] control plane version: v1.34.1
	I1026 15:13:42.072113 1113766 api_server.go:131] duration metric: took 1.007740479s to wait for apiserver health ...
	I1026 15:13:42.072124 1113766 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:13:42.076256 1113766 system_pods.go:59] 8 kube-system pods found
	I1026 15:13:42.076293 1113766 system_pods.go:61] "coredns-66bc5c9577-pnbct" [5ed72083-0ec8-4686-be6f-962755eee655] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:13:42.076305 1113766 system_pods.go:61] "etcd-embed-certs-535130" [5a890218-8e8c-4072-a89d-dec140b353f8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:13:42.076313 1113766 system_pods.go:61] "kindnet-mlqjm" [526c1bc2-396a-4668-8248-d95483175948] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1026 15:13:42.076325 1113766 system_pods.go:61] "kube-apiserver-embed-certs-535130" [5e297bec-df61-4675-b6d7-1d5a67e0f3e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:13:42.076341 1113766 system_pods.go:61] "kube-controller-manager-embed-certs-535130" [de44f030-b276-41e4-9194-8ff5827569ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:13:42.076349 1113766 system_pods.go:61] "kube-proxy-nbr2d" [6afa7745-4329-4477-9744-1aa5b789adc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:13:42.076356 1113766 system_pods.go:61] "kube-scheduler-embed-certs-535130" [39891617-036e-4f05-a816-1b7418d2b3f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:13:42.076363 1113766 system_pods.go:61] "storage-provisioner" [ecac2fee-1c15-4fee-9ccd-cf42d0a041c3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:13:42.076372 1113766 system_pods.go:74] duration metric: took 4.239846ms to wait for pod list to return data ...
	I1026 15:13:42.076383 1113766 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:13:42.079484 1113766 default_sa.go:45] found service account: "default"
	I1026 15:13:42.079503 1113766 default_sa.go:55] duration metric: took 3.114492ms for default service account to be created ...
	I1026 15:13:42.079512 1113766 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:13:42.082915 1113766 system_pods.go:86] 8 kube-system pods found
	I1026 15:13:42.082945 1113766 system_pods.go:89] "coredns-66bc5c9577-pnbct" [5ed72083-0ec8-4686-be6f-962755eee655] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:13:42.082955 1113766 system_pods.go:89] "etcd-embed-certs-535130" [5a890218-8e8c-4072-a89d-dec140b353f8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:13:42.082967 1113766 system_pods.go:89] "kindnet-mlqjm" [526c1bc2-396a-4668-8248-d95483175948] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1026 15:13:42.082975 1113766 system_pods.go:89] "kube-apiserver-embed-certs-535130" [5e297bec-df61-4675-b6d7-1d5a67e0f3e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:13:42.082983 1113766 system_pods.go:89] "kube-controller-manager-embed-certs-535130" [de44f030-b276-41e4-9194-8ff5827569ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:13:42.082991 1113766 system_pods.go:89] "kube-proxy-nbr2d" [6afa7745-4329-4477-9744-1aa5b789adc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:13:42.082998 1113766 system_pods.go:89] "kube-scheduler-embed-certs-535130" [39891617-036e-4f05-a816-1b7418d2b3f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:13:42.083006 1113766 system_pods.go:89] "storage-provisioner" [ecac2fee-1c15-4fee-9ccd-cf42d0a041c3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:13:42.083016 1113766 system_pods.go:126] duration metric: took 3.497362ms to wait for k8s-apps to be running ...
	I1026 15:13:42.083025 1113766 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:13:42.083073 1113766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:13:42.099508 1113766 system_svc.go:56] duration metric: took 16.470343ms WaitForService to wait for kubelet
	I1026 15:13:42.099540 1113766 kubeadm.go:586] duration metric: took 3.192109292s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:13:42.099663 1113766 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:13:42.103491 1113766 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 15:13:42.103592 1113766 node_conditions.go:123] node cpu capacity is 8
	I1026 15:13:42.103633 1113766 node_conditions.go:105] duration metric: took 3.962936ms to run NodePressure ...
	I1026 15:13:42.103651 1113766 start.go:241] waiting for startup goroutines ...
	I1026 15:13:42.103660 1113766 start.go:246] waiting for cluster config update ...
	I1026 15:13:42.103674 1113766 start.go:255] writing updated cluster config ...
	I1026 15:13:42.104029 1113766 ssh_runner.go:195] Run: rm -f paused
	I1026 15:13:42.108545 1113766 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:13:42.117346 1113766 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pnbct" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 15:13:44.124646 1113766 pod_ready.go:104] pod "coredns-66bc5c9577-pnbct" is not "Ready", error: <nil>
	W1026 15:13:44.050572 1107827 node_ready.go:57] node "auto-498531" has "Ready":"False" status (will retry)
	W1026 15:13:46.050839 1107827 node_ready.go:57] node "auto-498531" has "Ready":"False" status (will retry)
	I1026 15:13:46.550212 1107827 node_ready.go:49] node "auto-498531" is "Ready"
	I1026 15:13:46.550250 1107827 node_ready.go:38] duration metric: took 11.50388973s for node "auto-498531" to be "Ready" ...
	I1026 15:13:46.550267 1107827 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:13:46.550338 1107827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
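
[Editor's note] The node_ready wait in the third interleaved run (PID 1107827, cluster "auto-498531") retries about every 2 seconds until the node's Ready condition flips to True, which here took ~11.5s. A minimal client-go sketch of the same check, assuming a standard ~/.kube/config (minikube's real code goes through its own kubeconfig handling):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "auto-498531", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					fmt.Printf("Ready=%s\n", c.Status)
    					if c.Status == corev1.ConditionTrue {
    						return
    					}
    				}
    			}
    		}
    		time.Sleep(2 * time.Second) // the log shows ~2s retries
    	}
    }
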
	
	
	==> CRI-O <==
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.828146276Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.830979782Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=7bfd36c4-6ce0-47e4-bc4b-e8b5763bbf06 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.831835582Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a6f736c3-45da-4bc2-9608-fc5943c4435a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.832796517Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.833289016Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.833767325Z" level=info msg="Ran pod sandbox 6a13811aad91b61b77733094a452623be770779c53983d308398d24b6ca27333 with infra container: kube-system/kindnet-9tqxv/POD" id=7bfd36c4-6ce0-47e4-bc4b-e8b5763bbf06 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.833974462Z" level=info msg="Ran pod sandbox c5bd8a9276e64d520549c0ec08d3b8735d232153d0020848bc173f4a22e52107 with infra container: kube-system/kube-proxy-jfm7b/POD" id=a6f736c3-45da-4bc2-9608-fc5943c4435a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.835277368Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=13b3a378-6997-4f01-807f-acb2f4105568 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.835330245Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=282ec152-f0dd-422d-84eb-229f59a4fd8a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.836383551Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=23c06f72-980d-4523-96c9-8d7ee5af8027 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.836413557Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=21ff233c-06ba-4248-b457-5d4f2266a703 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.837634887Z" level=info msg="Creating container: kube-system/kube-proxy-jfm7b/kube-proxy" id=03318cee-5c38-4554-bd2b-0d9e0aa6de76 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.837662428Z" level=info msg="Creating container: kube-system/kindnet-9tqxv/kindnet-cni" id=6284997d-5769-4371-a5e0-6b08ff48ab71 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.837757578Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.837767342Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.843450987Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.84450578Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.846388751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.846780337Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.872280019Z" level=info msg="Created container 28e4049021789d7b497ba2bfd04b269e3b3c2807c7507dd9f483593309c84b80: kube-system/kindnet-9tqxv/kindnet-cni" id=6284997d-5769-4371-a5e0-6b08ff48ab71 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.872964192Z" level=info msg="Starting container: 28e4049021789d7b497ba2bfd04b269e3b3c2807c7507dd9f483593309c84b80" id=ebfef72f-d2ae-4526-822b-84855d1ff1ef name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.875126497Z" level=info msg="Started container" PID=1034 containerID=28e4049021789d7b497ba2bfd04b269e3b3c2807c7507dd9f483593309c84b80 description=kube-system/kindnet-9tqxv/kindnet-cni id=ebfef72f-d2ae-4526-822b-84855d1ff1ef name=/runtime.v1.RuntimeService/StartContainer sandboxID=6a13811aad91b61b77733094a452623be770779c53983d308398d24b6ca27333
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.8757217Z" level=info msg="Created container 4067fc481bc4baf4606a9f82937e71103371389384e0fc32bb2fb41a915456e9: kube-system/kube-proxy-jfm7b/kube-proxy" id=03318cee-5c38-4554-bd2b-0d9e0aa6de76 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.876513602Z" level=info msg="Starting container: 4067fc481bc4baf4606a9f82937e71103371389384e0fc32bb2fb41a915456e9" id=3058dddb-bfac-4768-9ff3-c9e117a05c74 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.87945866Z" level=info msg="Started container" PID=1035 containerID=4067fc481bc4baf4606a9f82937e71103371389384e0fc32bb2fb41a915456e9 description=kube-system/kube-proxy-jfm7b/kube-proxy id=3058dddb-bfac-4768-9ff3-c9e117a05c74 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c5bd8a9276e64d520549c0ec08d3b8735d232153d0020848bc173f4a22e52107
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	4067fc481bc4b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   5 seconds ago       Running             kube-proxy                1                   c5bd8a9276e64       kube-proxy-jfm7b                            kube-system
	28e4049021789       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   6a13811aad91b       kindnet-9tqxv                               kube-system
	d301b19a9754f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   e377ee3bf177b       kube-controller-manager-newest-cni-450976   kube-system
	eca31c4960e5f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   2a22a8c561c92       etcd-newest-cni-450976                      kube-system
	7b4821416cdb1       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   310c3819897f1       kube-scheduler-newest-cni-450976            kube-system
	dad7b5a044afb       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   b652e82ca92ce       kube-apiserver-newest-cni-450976            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-450976
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-450976
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=newest-cni-450976
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_13_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:13:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-450976
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:13:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:13:43 +0000   Sun, 26 Oct 2025 15:13:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:13:43 +0000   Sun, 26 Oct 2025 15:13:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:13:43 +0000   Sun, 26 Oct 2025 15:13:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 26 Oct 2025 15:13:43 +0000   Sun, 26 Oct 2025 15:13:12 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-450976
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                1575f574-b7cf-4d6a-9ab9-f0fb8538a042
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-450976                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-9tqxv                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-450976             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-450976    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-jfm7b                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-450976             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 4s    kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s   kubelet          Node newest-cni-450976 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet          Node newest-cni-450976 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet          Node newest-cni-450976 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node newest-cni-450976 event: Registered Node newest-cni-450976 in Controller
	  Normal  RegisteredNode           3s    node-controller  Node newest-cni-450976 event: Registered Node newest-cni-450976 in Controller
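
[Editor's note] The two node.kubernetes.io/not-ready taints in the Taints field above are what left coredns and storage-provisioner Unschedulable earlier in this log ("1 node(s) had untolerated taint"); they are cleared once the CNI config lands and the kubelet reports Ready. A short client-go sketch for inspecting them, under the same kubeconfig assumption as before:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "newest-cni-450976", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, t := range node.Spec.Taints {
    		// e.g. node.kubernetes.io/not-ready=:NoSchedule
    		fmt.Printf("%s=%s:%s\n", t.Key, t.Value, t.Effect)
    	}
    }
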
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	
	
	==> etcd [eca31c4960e5fee40ff7a27e80d78ba23e050229040a9c119c1a39d6d964c134] <==
	{"level":"warn","ts":"2025-10-26T15:13:42.131856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.149079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.157300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.166648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.176362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.184684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.192294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.201492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.210373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.219227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.227332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.237284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.247658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.267813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.278855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.291185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.306279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.317277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.324021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.333320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.341640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.357354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.366053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.376454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.443284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45550","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:13:49 up  2:56,  0 user,  load average: 3.33, 2.70, 1.82
	Linux newest-cni-450976 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [28e4049021789d7b497ba2bfd04b269e3b3c2807c7507dd9f483593309c84b80] <==
	I1026 15:13:44.037840       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:13:44.133039       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1026 15:13:44.133381       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:13:44.133401       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:13:44.133450       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:13:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:13:44.337210       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:13:44.337388       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:13:44.337408       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:13:44.337557       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 15:13:44.638246       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:13:44.638275       1 metrics.go:72] Registering metrics
	I1026 15:13:44.638355       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [dad7b5a044afb9affbe248c4fce4bf89b73634fb0298fd50fe83199eecb4779f] <==
	I1026 15:13:42.981324       1 aggregator.go:171] initial CRD sync complete...
	I1026 15:13:42.981343       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 15:13:42.981349       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:13:42.981358       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:13:42.981466       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 15:13:42.981475       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 15:13:42.981482       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 15:13:42.981527       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:13:42.981583       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 15:13:42.981899       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1026 15:13:42.984662       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 15:13:42.988889       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 15:13:43.003561       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:13:43.300093       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 15:13:43.335564       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:13:43.362268       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:13:43.371565       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:13:43.379719       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:13:43.417006       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.185.174"}
	I1026 15:13:43.429825       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.172.68"}
	I1026 15:13:43.890024       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:13:46.556387       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:13:46.654904       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:13:46.805606       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [d301b19a9754fef9062ff0ab32cef39843a3b341f9c9c9c979ce50772e060f34] <==
	I1026 15:13:46.314235       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 15:13:46.314341       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1026 15:13:46.315526       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 15:13:46.315548       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 15:13:46.318800       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 15:13:46.321183       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 15:13:46.324458       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 15:13:46.328811       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 15:13:46.331120       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:13:46.331124       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 15:13:46.335333       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 15:13:46.335585       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 15:13:46.335707       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-450976"
	I1026 15:13:46.335778       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1026 15:13:46.337176       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 15:13:46.339215       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 15:13:46.341462       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 15:13:46.342907       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 15:13:46.351043       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:13:46.351062       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:13:46.351070       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 15:13:46.351388       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 15:13:46.352260       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 15:13:46.352516       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 15:13:46.359607       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [4067fc481bc4baf4606a9f82937e71103371389384e0fc32bb2fb41a915456e9] <==
	I1026 15:13:43.930656       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:13:44.002646       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:13:44.103580       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:13:44.103626       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1026 15:13:44.103751       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:13:44.126070       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:13:44.126145       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:13:44.131651       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:13:44.132983       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:13:44.133099       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:13:44.136538       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:13:44.136563       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:13:44.136599       1 config.go:200] "Starting service config controller"
	I1026 15:13:44.136618       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:13:44.136633       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:13:44.136648       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:13:44.136803       1 config.go:309] "Starting node config controller"
	I1026 15:13:44.136904       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:13:44.136939       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:13:44.236914       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 15:13:44.236971       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:13:44.236965       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7b4821416cdb1f5a1c75031b5a1a9853efa078e8f2964c61061e443a8fe518d0] <==
	I1026 15:13:41.459797       1 serving.go:386] Generated self-signed cert in-memory
	W1026 15:13:42.912494       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 15:13:42.912532       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 15:13:42.912545       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 15:13:42.912554       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 15:13:42.956801       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 15:13:42.956835       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:13:42.959117       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:13:42.959177       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:13:42.960861       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:13:42.960988       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 15:13:43.059810       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.024679     659 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.024783     659 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.024813     659 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.025730     659 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: E1026 15:13:43.051290     659 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-450976\" already exists" pod="kube-system/kube-scheduler-newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.051323     659 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: E1026 15:13:43.060438     659 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-450976\" already exists" pod="kube-system/etcd-newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.060476     659 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: E1026 15:13:43.067598     659 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-450976\" already exists" pod="kube-system/kube-apiserver-newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.067642     659 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: E1026 15:13:43.074148     659 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-450976\" already exists" pod="kube-system/kube-controller-manager-newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.399883     659 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: E1026 15:13:43.407530     659 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-450976\" already exists" pod="kube-system/kube-controller-manager-newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.519202     659 apiserver.go:52] "Watching apiserver"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.522931     659 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.527456     659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e6c6e48-eb1f-4a31-9cf4-390096851e53-lib-modules\") pod \"kube-proxy-jfm7b\" (UID: \"6e6c6e48-eb1f-4a31-9cf4-390096851e53\") " pod="kube-system/kube-proxy-jfm7b"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.527522     659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6ade61f-e6fb-4746-9b65-ce10129cd53e-xtables-lock\") pod \"kindnet-9tqxv\" (UID: \"d6ade61f-e6fb-4746-9b65-ce10129cd53e\") " pod="kube-system/kindnet-9tqxv"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.527618     659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d6ade61f-e6fb-4746-9b65-ce10129cd53e-cni-cfg\") pod \"kindnet-9tqxv\" (UID: \"d6ade61f-e6fb-4746-9b65-ce10129cd53e\") " pod="kube-system/kindnet-9tqxv"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.527653     659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6ade61f-e6fb-4746-9b65-ce10129cd53e-lib-modules\") pod \"kindnet-9tqxv\" (UID: \"d6ade61f-e6fb-4746-9b65-ce10129cd53e\") " pod="kube-system/kindnet-9tqxv"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.527725     659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e6c6e48-eb1f-4a31-9cf4-390096851e53-xtables-lock\") pod \"kube-proxy-jfm7b\" (UID: \"6e6c6e48-eb1f-4a31-9cf4-390096851e53\") " pod="kube-system/kube-proxy-jfm7b"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.643033     659 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: E1026 15:13:43.651043     659 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-450976\" already exists" pod="kube-system/kube-apiserver-newest-cni-450976"
	Oct 26 15:13:45 newest-cni-450976 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:13:45 newest-cni-450976 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:13:45 newest-cni-450976 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
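
The kube-proxy warning above (15:13:44) flags that nodePortAddresses is unset, so NodePort connections are accepted on all local IPs. A minimal sketch of the remedy the log itself suggests, assuming a kubeadm-bootstrapped cluster where kube-proxy reads its KubeProxyConfiguration from the kube-proxy ConfigMap in kube-system (the ConfigMap name, the config.conf key, and the k8s-app=kube-proxy label are kubeadm defaults, not taken from this report):

	# Restrict NodePort listeners to the node's primary IP family:
	kubectl -n kube-system edit configmap kube-proxy
	# in the KubeProxyConfiguration under the config.conf key, set:
	#   nodePortAddresses: ["primary"]
	# then restart the kube-proxy pods so they reload the config:
	kubectl -n kube-system delete pod -l k8s-app=kube-proxy
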
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-450976 -n newest-cni-450976
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-450976 -n newest-cni-450976: exit status 2 (446.602726ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
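
The --format flag takes a Go template over minikube's status struct; the harness queries one field at a time (.APIServer here, .Host again later in the post-mortem). A combined probe is possible, a sketch assuming minikube's documented .Host, .Kubelet, and .APIServer status fields:

	out/minikube-linux-amd64 status -p newest-cni-450976 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'
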
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-450976 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-7jwrr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nfgb7 kubernetes-dashboard-855c9754f9-ztb74
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-450976 describe pod coredns-66bc5c9577-7jwrr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nfgb7 kubernetes-dashboard-855c9754f9-ztb74
I1026 15:13:49.958222  845095 config.go:182] Loaded profile config "auto-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-450976 describe pod coredns-66bc5c9577-7jwrr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nfgb7 kubernetes-dashboard-855c9754f9-ztb74: exit status 1 (87.685201ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-7jwrr" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-nfgb7" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-ztb74" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-450976 describe pod coredns-66bc5c9577-7jwrr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nfgb7 kubernetes-dashboard-855c9754f9-ztb74: exit status 1
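
The NotFound errors above are a namespace mismatch rather than missing diagnostics: the pod list was gathered with -A (all namespaces), but the describe ran without -n and therefore searched only the default namespace. A sketch of the same probe that also records each pod's namespace (same field selector as above; the jsonpath range syntax is standard kubectl):

	kubectl --context newest-cni-450976 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\n"}{end}'
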
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-450976
helpers_test.go:243: (dbg) docker inspect newest-cni-450976:

-- stdout --
	[
	    {
	        "Id": "780b6ec8823b2c38d1086c59e7fddd36420479fc7b248085a3cf4f4af2acf916",
	        "Created": "2025-10-26T15:12:59.003317793Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1114953,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:13:33.584353704Z",
	            "FinishedAt": "2025-10-26T15:13:32.658442501Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/780b6ec8823b2c38d1086c59e7fddd36420479fc7b248085a3cf4f4af2acf916/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/780b6ec8823b2c38d1086c59e7fddd36420479fc7b248085a3cf4f4af2acf916/hostname",
	        "HostsPath": "/var/lib/docker/containers/780b6ec8823b2c38d1086c59e7fddd36420479fc7b248085a3cf4f4af2acf916/hosts",
	        "LogPath": "/var/lib/docker/containers/780b6ec8823b2c38d1086c59e7fddd36420479fc7b248085a3cf4f4af2acf916/780b6ec8823b2c38d1086c59e7fddd36420479fc7b248085a3cf4f4af2acf916-json.log",
	        "Name": "/newest-cni-450976",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-450976:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-450976",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "780b6ec8823b2c38d1086c59e7fddd36420479fc7b248085a3cf4f4af2acf916",
	                "LowerDir": "/var/lib/docker/overlay2/fe3ecd958d722f7448e33c5d5e455e3fd3a3f1954f672020596a899bb4dc58eb-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fe3ecd958d722f7448e33c5d5e455e3fd3a3f1954f672020596a899bb4dc58eb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fe3ecd958d722f7448e33c5d5e455e3fd3a3f1954f672020596a899bb4dc58eb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fe3ecd958d722f7448e33c5d5e455e3fd3a3f1954f672020596a899bb4dc58eb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-450976",
	                "Source": "/var/lib/docker/volumes/newest-cni-450976/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-450976",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-450976",
	                "name.minikube.sigs.k8s.io": "newest-cni-450976",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e7e9411a676419dfeb2cd6927394356cb760dfa197e267d653bc022dbcacc23d",
	            "SandboxKey": "/var/run/docker/netns/e7e9411a6764",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33867"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33868"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33871"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33869"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33870"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-450976": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:7c:1a:c5:4a:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4254446822c371d2067f0edad3ee1d5a391333ca11c0b013055abf6c85fb5682",
	                    "EndpointID": "12e5838078d6af1936af6d1081db262ef67ea3f1e7a35721b11fe8ff0cc0a8d1",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-450976",
	                        "780b6ec8823b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
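
The inspect output above shows the container's 8443/tcp port (the Kubernetes API server) published on a dynamically chosen host port. A single field can be extracted with docker inspect's Go-template -f flag (standard docker CLI syntax; the container name comes from this report):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-450976
	# per the JSON above, this prints: 33870
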
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-450976 -n newest-cni-450976
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-450976 -n newest-cni-450976: exit status 2 (407.259635ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-450976 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-450976 logs -n 25: (1.538069761s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-330914                                                                                                                                                                                                                     │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ image   │ no-preload-475081 image list --format=json                                                                                                                                                                                                    │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ pause   │ -p no-preload-475081 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │                     │
	│ delete  │ -p old-k8s-version-330914                                                                                                                                                                                                                     │ old-k8s-version-330914       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ delete  │ -p disable-driver-mounts-619402                                                                                                                                                                                                               │ disable-driver-mounts-619402 │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p default-k8s-diff-port-790012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-790012 │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:13 UTC │
	│ delete  │ -p no-preload-475081                                                                                                                                                                                                                          │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ delete  │ -p no-preload-475081                                                                                                                                                                                                                          │ no-preload-475081            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:12 UTC │
	│ start   │ -p newest-cni-450976 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-450976            │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:13 UTC │
	│ start   │ -p cert-expiration-619245 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-619245       │ jenkins │ v1.37.0 │ 26 Oct 25 15:12 UTC │ 26 Oct 25 15:13 UTC │
	│ delete  │ -p cert-expiration-619245                                                                                                                                                                                                                     │ cert-expiration-619245       │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ start   │ -p auto-498531 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-498531                  │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-535130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-535130           │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ stop    │ -p embed-certs-535130 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-535130           │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ addons  │ enable metrics-server -p newest-cni-450976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-450976            │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ stop    │ -p newest-cni-450976 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-450976            │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ addons  │ enable dashboard -p embed-certs-535130 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-535130           │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ start   │ -p embed-certs-535130 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-535130           │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-450976 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-450976            │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ start   │ -p newest-cni-450976 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-450976            │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-790012 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-790012 │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-790012 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-790012 │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ image   │ newest-cni-450976 image list --format=json                                                                                                                                                                                                    │ newest-cni-450976            │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	│ pause   │ -p newest-cni-450976 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-450976            │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │                     │
	│ ssh     │ -p auto-498531 pgrep -a kubelet                                                                                                                                                                                                               │ auto-498531                  │ jenkins │ v1.37.0 │ 26 Oct 25 15:13 UTC │ 26 Oct 25 15:13 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:13:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:13:33.334804 1114752 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:13:33.335030 1114752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:13:33.335037 1114752 out.go:374] Setting ErrFile to fd 2...
	I1026 15:13:33.335041 1114752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:13:33.335275 1114752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:13:33.335717 1114752 out.go:368] Setting JSON to false
	I1026 15:13:33.336864 1114752 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10561,"bootTime":1761481052,"procs":382,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:13:33.336965 1114752 start.go:141] virtualization: kvm guest
	I1026 15:13:33.338732 1114752 out.go:179] * [newest-cni-450976] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:13:33.340086 1114752 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:13:33.340115 1114752 notify.go:220] Checking for updates...
	I1026 15:13:33.342297 1114752 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:13:33.343663 1114752 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:13:33.344846 1114752 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 15:13:33.346031 1114752 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:13:33.347279 1114752 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:13:33.349221 1114752 config.go:182] Loaded profile config "newest-cni-450976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:33.349915 1114752 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:13:33.376031 1114752 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 15:13:33.376129 1114752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:13:33.438088 1114752 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-26 15:13:33.426631481 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:13:33.438228 1114752 docker.go:318] overlay module found
	I1026 15:13:33.440047 1114752 out.go:179] * Using the docker driver based on existing profile
	I1026 15:13:33.441532 1114752 start.go:305] selected driver: docker
	I1026 15:13:33.441548 1114752 start.go:925] validating driver "docker" against &{Name:newest-cni-450976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-450976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:13:33.441657 1114752 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:13:33.442266 1114752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:13:33.505289 1114752 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-26 15:13:33.494889004 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:13:33.505603 1114752 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 15:13:33.505638 1114752 cni.go:84] Creating CNI manager for ""
	I1026 15:13:33.505687 1114752 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:13:33.505724 1114752 start.go:349] cluster config:
	{Name:newest-cni-450976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-450976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:13:33.508668 1114752 out.go:179] * Starting "newest-cni-450976" primary control-plane node in "newest-cni-450976" cluster
	I1026 15:13:33.510071 1114752 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:13:33.511479 1114752 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:13:33.512708 1114752 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:13:33.512753 1114752 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:13:33.512777 1114752 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 15:13:33.512801 1114752 cache.go:58] Caching tarball of preloaded images
	I1026 15:13:33.512888 1114752 preload.go:233] Found /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 15:13:33.512898 1114752 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:13:33.512995 1114752 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/config.json ...
	I1026 15:13:33.534783 1114752 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:13:33.534810 1114752 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:13:33.534834 1114752 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:13:33.534873 1114752 start.go:360] acquireMachinesLock for newest-cni-450976: {Name:mkd25f5c88d69734bd3a1425b2ee7adeba19f996 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:13:33.534945 1114752 start.go:364] duration metric: took 46.831µs to acquireMachinesLock for "newest-cni-450976"
	I1026 15:13:33.534970 1114752 start.go:96] Skipping create...Using existing machine configuration
	I1026 15:13:33.534980 1114752 fix.go:54] fixHost starting: 
	I1026 15:13:33.535289 1114752 cli_runner.go:164] Run: docker container inspect newest-cni-450976 --format={{.State.Status}}
	I1026 15:13:33.554995 1114752 fix.go:112] recreateIfNeeded on newest-cni-450976: state=Stopped err=<nil>
	W1026 15:13:33.555041 1114752 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 15:13:31.934443 1100384 system_pods.go:86] 8 kube-system pods found
	I1026 15:13:31.934495 1100384 system_pods.go:89] "coredns-66bc5c9577-shw6l" [34b47d5d-504d-4f7a-905e-acd0787bad18] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:13:31.934506 1100384 system_pods.go:89] "etcd-default-k8s-diff-port-790012" [18a43e2a-b91b-4b24-a5f6-4ce939ee4840] Running
	I1026 15:13:31.934515 1100384 system_pods.go:89] "kindnet-7ch5r" [54b7119d-e62c-46d9-a2a6-2f5a0f1e4e17] Running
	I1026 15:13:31.934521 1100384 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-790012" [cdf846a0-22e6-4261-abdc-bd5f72bdbc80] Running
	I1026 15:13:31.934528 1100384 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-790012" [4e9cad9b-4439-4d70-98c2-10b7fcd16c25] Running
	I1026 15:13:31.934533 1100384 system_pods.go:89] "kube-proxy-wk2nn" [928b7499-0464-4469-9f74-0e72935a8464] Running
	I1026 15:13:31.934539 1100384 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-790012" [80d7b5ad-decf-4b5f-a03f-4f63aed757a1] Running
	I1026 15:13:31.934547 1100384 system_pods.go:89] "storage-provisioner" [1f95e80f-9f93-44c4-b761-fd518de0c4d9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:13:31.934574 1100384 retry.go:31] will retry after 343.738043ms: missing components: kube-dns
	I1026 15:13:32.282807 1100384 system_pods.go:86] 8 kube-system pods found
	I1026 15:13:32.282849 1100384 system_pods.go:89] "coredns-66bc5c9577-shw6l" [34b47d5d-504d-4f7a-905e-acd0787bad18] Running
	I1026 15:13:32.282858 1100384 system_pods.go:89] "etcd-default-k8s-diff-port-790012" [18a43e2a-b91b-4b24-a5f6-4ce939ee4840] Running
	I1026 15:13:32.282865 1100384 system_pods.go:89] "kindnet-7ch5r" [54b7119d-e62c-46d9-a2a6-2f5a0f1e4e17] Running
	I1026 15:13:32.282871 1100384 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-790012" [cdf846a0-22e6-4261-abdc-bd5f72bdbc80] Running
	I1026 15:13:32.282875 1100384 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-790012" [4e9cad9b-4439-4d70-98c2-10b7fcd16c25] Running
	I1026 15:13:32.282878 1100384 system_pods.go:89] "kube-proxy-wk2nn" [928b7499-0464-4469-9f74-0e72935a8464] Running
	I1026 15:13:32.282881 1100384 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-790012" [80d7b5ad-decf-4b5f-a03f-4f63aed757a1] Running
	I1026 15:13:32.282886 1100384 system_pods.go:89] "storage-provisioner" [1f95e80f-9f93-44c4-b761-fd518de0c4d9] Running
	I1026 15:13:32.282897 1100384 system_pods.go:126] duration metric: took 891.5938ms to wait for k8s-apps to be running ...
	I1026 15:13:32.282914 1100384 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:13:32.282969 1100384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:13:32.296468 1100384 system_svc.go:56] duration metric: took 13.54263ms WaitForService to wait for kubelet
	I1026 15:13:32.296504 1100384 kubeadm.go:586] duration metric: took 12.759102603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:13:32.296526 1100384 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:13:32.299850 1100384 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 15:13:32.299878 1100384 node_conditions.go:123] node cpu capacity is 8
	I1026 15:13:32.299894 1100384 node_conditions.go:105] duration metric: took 3.363088ms to run NodePressure ...
	I1026 15:13:32.299907 1100384 start.go:241] waiting for startup goroutines ...
	I1026 15:13:32.299914 1100384 start.go:246] waiting for cluster config update ...
	I1026 15:13:32.299924 1100384 start.go:255] writing updated cluster config ...
	I1026 15:13:32.300234 1100384 ssh_runner.go:195] Run: rm -f paused
	I1026 15:13:32.304335 1100384 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:13:32.307473 1100384 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-shw6l" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.312359 1100384 pod_ready.go:94] pod "coredns-66bc5c9577-shw6l" is "Ready"
	I1026 15:13:32.312384 1100384 pod_ready.go:86] duration metric: took 4.8862ms for pod "coredns-66bc5c9577-shw6l" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.314462 1100384 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.318273 1100384 pod_ready.go:94] pod "etcd-default-k8s-diff-port-790012" is "Ready"
	I1026 15:13:32.318297 1100384 pod_ready.go:86] duration metric: took 3.808174ms for pod "etcd-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.320377 1100384 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.324152 1100384 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-790012" is "Ready"
	I1026 15:13:32.324183 1100384 pod_ready.go:86] duration metric: took 3.787572ms for pod "kube-apiserver-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.325956 1100384 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.708420 1100384 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-790012" is "Ready"
	I1026 15:13:32.708454 1100384 pod_ready.go:86] duration metric: took 382.476768ms for pod "kube-controller-manager-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:32.908230 1100384 pod_ready.go:83] waiting for pod "kube-proxy-wk2nn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:33.308766 1100384 pod_ready.go:94] pod "kube-proxy-wk2nn" is "Ready"
	I1026 15:13:33.308793 1100384 pod_ready.go:86] duration metric: took 400.537302ms for pod "kube-proxy-wk2nn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:33.509496 1100384 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:33.908459 1100384 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-790012" is "Ready"
	I1026 15:13:33.908489 1100384 pod_ready.go:86] duration metric: took 398.969559ms for pod "kube-scheduler-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:33.908501 1100384 pod_ready.go:40] duration metric: took 1.604136935s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:13:33.958143 1100384 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:13:33.963345 1100384 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-790012" cluster and "default" namespace by default
	I1026 15:13:31.698014 1107827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:32.198139 1107827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:32.698359 1107827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:33.197334 1107827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:33.697439 1107827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:34.198261 1107827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:34.697736 1107827 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:13:34.769063 1107827 kubeadm.go:1113] duration metric: took 4.175520669s to wait for elevateKubeSystemPrivileges
	I1026 15:13:34.769102 1107827 kubeadm.go:402] duration metric: took 16.397307608s to StartCluster
	I1026 15:13:34.769127 1107827 settings.go:142] acquiring lock: {Name:mkab79daecf1fab35293493e1e2484069a81f3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:34.769225 1107827 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:13:34.770585 1107827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:34.770908 1107827 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:13:34.770943 1107827 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:13:34.770916 1107827 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:13:34.771042 1107827 addons.go:69] Setting default-storageclass=true in profile "auto-498531"
	I1026 15:13:34.771064 1107827 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-498531"
	I1026 15:13:34.771123 1107827 config.go:182] Loaded profile config "auto-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:34.771035 1107827 addons.go:69] Setting storage-provisioner=true in profile "auto-498531"
	I1026 15:13:34.771231 1107827 addons.go:238] Setting addon storage-provisioner=true in "auto-498531"
	I1026 15:13:34.771262 1107827 host.go:66] Checking if "auto-498531" exists ...
	I1026 15:13:34.771577 1107827 cli_runner.go:164] Run: docker container inspect auto-498531 --format={{.State.Status}}
	I1026 15:13:34.771772 1107827 cli_runner.go:164] Run: docker container inspect auto-498531 --format={{.State.Status}}
	I1026 15:13:34.776443 1107827 out.go:179] * Verifying Kubernetes components...
	I1026 15:13:34.777765 1107827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:34.799695 1107827 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:13:34.799746 1107827 addons.go:238] Setting addon default-storageclass=true in "auto-498531"
	I1026 15:13:34.799799 1107827 host.go:66] Checking if "auto-498531" exists ...
	I1026 15:13:34.800417 1107827 cli_runner.go:164] Run: docker container inspect auto-498531 --format={{.State.Status}}
	I1026 15:13:34.801236 1107827 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:34.801257 1107827 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:13:34.801312 1107827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-498531
	I1026 15:13:34.829194 1107827 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:34.829225 1107827 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:13:34.829294 1107827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-498531
	I1026 15:13:34.832250 1107827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/auto-498531/id_rsa Username:docker}
	I1026 15:13:34.854598 1107827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/auto-498531/id_rsa Username:docker}
	I1026 15:13:34.869079 1107827 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 15:13:34.927602 1107827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:34.952284 1107827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:34.970067 1107827 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:35.045061 1107827 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1026 15:13:35.046332 1107827 node_ready.go:35] waiting up to 15m0s for node "auto-498531" to be "Ready" ...
	I1026 15:13:35.313398 1107827 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
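The sed pipeline above rewrites the coredns ConfigMap so that the Corefile gains a hosts stanza (plus a log directive) ahead of its forward plugin; the injected fragment, spelled out inside the command itself, is:

    hosts {
       192.168.94.1 host.minikube.internal
       fallthrough
    }

which is what the "host record injected into CoreDNS's ConfigMap" line then confirms.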
	I1026 15:13:31.389560 1113766 out.go:252] * Restarting existing docker container for "embed-certs-535130" ...
	I1026 15:13:31.389635 1113766 cli_runner.go:164] Run: docker start embed-certs-535130
	I1026 15:13:31.660384 1113766 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:13:31.678708 1113766 kic.go:430] container "embed-certs-535130" state is running.
	I1026 15:13:31.679126 1113766 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-535130
	I1026 15:13:31.697538 1113766 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/config.json ...
	I1026 15:13:31.697945 1113766 machine.go:93] provisionDockerMachine start ...
	I1026 15:13:31.698059 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:31.718923 1113766 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:31.719190 1113766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I1026 15:13:31.719210 1113766 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:13:31.720103 1113766 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48182->127.0.0.1:33862: read: connection reset by peer
	I1026 15:13:34.882840 1113766 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-535130
	
	I1026 15:13:34.882880 1113766 ubuntu.go:182] provisioning hostname "embed-certs-535130"
	I1026 15:13:34.882953 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:34.905938 1113766 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:34.906301 1113766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I1026 15:13:34.906322 1113766 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-535130 && echo "embed-certs-535130" | sudo tee /etc/hostname
	I1026 15:13:35.076003 1113766 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-535130
	
	I1026 15:13:35.076117 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:35.103101 1113766 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:35.103427 1113766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I1026 15:13:35.103450 1113766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-535130' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-535130/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-535130' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:13:35.256045 1113766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:13:35.256087 1113766 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-841519/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-841519/.minikube}
	I1026 15:13:35.256116 1113766 ubuntu.go:190] setting up certificates
	I1026 15:13:35.256132 1113766 provision.go:84] configureAuth start
	I1026 15:13:35.256217 1113766 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-535130
	I1026 15:13:35.279777 1113766 provision.go:143] copyHostCerts
	I1026 15:13:35.279863 1113766 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem, removing ...
	I1026 15:13:35.279881 1113766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem
	I1026 15:13:35.279958 1113766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem (1082 bytes)
	I1026 15:13:35.280106 1113766 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem, removing ...
	I1026 15:13:35.280124 1113766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem
	I1026 15:13:35.280197 1113766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem (1123 bytes)
	I1026 15:13:35.280306 1113766 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem, removing ...
	I1026 15:13:35.280314 1113766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem
	I1026 15:13:35.280352 1113766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem (1675 bytes)
	I1026 15:13:35.280449 1113766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem org=jenkins.embed-certs-535130 san=[127.0.0.1 192.168.76.2 embed-certs-535130 localhost minikube]
	I1026 15:13:35.849277 1113766 provision.go:177] copyRemoteCerts
	I1026 15:13:35.849339 1113766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:13:35.849383 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:35.868289 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:35.970503 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1026 15:13:35.989799 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:13:36.009306 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:13:36.028885 1113766 provision.go:87] duration metric: took 772.732042ms to configureAuth
	I1026 15:13:36.028916 1113766 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:13:36.029146 1113766 config.go:182] Loaded profile config "embed-certs-535130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:36.029314 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:36.049872 1113766 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:36.050196 1113766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33862 <nil> <nil>}
	I1026 15:13:36.050225 1113766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:13:35.314522 1107827 addons.go:514] duration metric: took 543.574864ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 15:13:35.550021 1107827 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-498531" context rescaled to 1 replicas
	I1026 15:13:36.367622 1113766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:13:36.367650 1113766 machine.go:96] duration metric: took 4.669682302s to provisionDockerMachine
	I1026 15:13:36.367675 1113766 start.go:293] postStartSetup for "embed-certs-535130" (driver="docker")
	I1026 15:13:36.367689 1113766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:13:36.367750 1113766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:13:36.367797 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:36.388448 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:36.492995 1113766 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:13:36.496912 1113766 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:13:36.496990 1113766 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:13:36.497005 1113766 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/addons for local assets ...
	I1026 15:13:36.497441 1113766 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/files for local assets ...
	I1026 15:13:36.497581 1113766 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem -> 8450952.pem in /etc/ssl/certs
	I1026 15:13:36.497738 1113766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:13:36.506836 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:13:36.525298 1113766 start.go:296] duration metric: took 157.60468ms for postStartSetup
	I1026 15:13:36.525405 1113766 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:13:36.525460 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:36.544951 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:36.644413 1113766 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:13:36.649341 1113766 fix.go:56] duration metric: took 5.281758238s for fixHost
	I1026 15:13:36.649370 1113766 start.go:83] releasing machines lock for "embed-certs-535130", held for 5.281812223s
	I1026 15:13:36.649447 1113766 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-535130
	I1026 15:13:36.667811 1113766 ssh_runner.go:195] Run: cat /version.json
	I1026 15:13:36.667869 1113766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:13:36.667877 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:36.667930 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:36.687798 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:36.688085 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:36.842869 1113766 ssh_runner.go:195] Run: systemctl --version
	I1026 15:13:36.849931 1113766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:13:36.885592 1113766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:13:36.890858 1113766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:13:36.890935 1113766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:13:36.899349 1113766 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 15:13:36.899377 1113766 start.go:495] detecting cgroup driver to use...
	I1026 15:13:36.899413 1113766 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 15:13:36.899462 1113766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:13:36.915265 1113766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:13:36.928368 1113766 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:13:36.928419 1113766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:13:36.943985 1113766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:13:36.957590 1113766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:13:37.049991 1113766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:13:37.136299 1113766 docker.go:234] disabling docker service ...
	I1026 15:13:37.136360 1113766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:13:37.151928 1113766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:13:37.165026 1113766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:13:37.251238 1113766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:13:37.336342 1113766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:13:37.348920 1113766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:13:37.365703 1113766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:13:37.365769 1113766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:37.375313 1113766 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 15:13:37.375377 1113766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:37.385150 1113766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:37.394723 1113766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:37.404588 1113766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:13:37.413415 1113766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:37.423296 1113766 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:37.432517 1113766 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:37.441828 1113766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:13:37.449865 1113766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:13:37.457461 1113766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:37.547638 1113766 ssh_runner.go:195] Run: sudo systemctl restart crio
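Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines (a sketch reconstructed from the commands; the log does not show the surrounding TOML sections):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The daemon-reload and systemctl restart crio that follow make the new configuration take effect.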
	I1026 15:13:37.664727 1113766 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:13:37.664798 1113766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:13:37.668861 1113766 start.go:563] Will wait 60s for crictl version
	I1026 15:13:37.668919 1113766 ssh_runner.go:195] Run: which crictl
	I1026 15:13:37.672511 1113766 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:13:37.701474 1113766 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:13:37.701556 1113766 ssh_runner.go:195] Run: crio --version
	I1026 15:13:37.731543 1113766 ssh_runner.go:195] Run: crio --version
	I1026 15:13:37.765561 1113766 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:13:33.556906 1114752 out.go:252] * Restarting existing docker container for "newest-cni-450976" ...
	I1026 15:13:33.556988 1114752 cli_runner.go:164] Run: docker start newest-cni-450976
	I1026 15:13:33.822470 1114752 cli_runner.go:164] Run: docker container inspect newest-cni-450976 --format={{.State.Status}}
	I1026 15:13:33.842102 1114752 kic.go:430] container "newest-cni-450976" state is running.
	I1026 15:13:33.842808 1114752 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450976
	I1026 15:13:33.863064 1114752 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/config.json ...
	I1026 15:13:33.863323 1114752 machine.go:93] provisionDockerMachine start ...
	I1026 15:13:33.863396 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:33.884364 1114752 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:33.884687 1114752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33867 <nil> <nil>}
	I1026 15:13:33.884704 1114752 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:13:33.885475 1114752 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35906->127.0.0.1:33867: read: connection reset by peer
	I1026 15:13:37.031343 1114752 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-450976
	
	I1026 15:13:37.031380 1114752 ubuntu.go:182] provisioning hostname "newest-cni-450976"
	I1026 15:13:37.031446 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:37.051564 1114752 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:37.051811 1114752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33867 <nil> <nil>}
	I1026 15:13:37.051826 1114752 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-450976 && echo "newest-cni-450976" | sudo tee /etc/hostname
	I1026 15:13:37.213333 1114752 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-450976
	
	I1026 15:13:37.213420 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:37.231946 1114752 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:37.232310 1114752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33867 <nil> <nil>}
	I1026 15:13:37.232342 1114752 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-450976' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-450976/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-450976' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:13:37.380632 1114752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:13:37.380665 1114752 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-841519/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-841519/.minikube}
	I1026 15:13:37.380705 1114752 ubuntu.go:190] setting up certificates
	I1026 15:13:37.380727 1114752 provision.go:84] configureAuth start
	I1026 15:13:37.380796 1114752 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450976
	I1026 15:13:37.399725 1114752 provision.go:143] copyHostCerts
	I1026 15:13:37.399829 1114752 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem, removing ...
	I1026 15:13:37.399846 1114752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem
	I1026 15:13:37.399931 1114752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem (1675 bytes)
	I1026 15:13:37.400150 1114752 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem, removing ...
	I1026 15:13:37.400182 1114752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem
	I1026 15:13:37.400227 1114752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem (1082 bytes)
	I1026 15:13:37.400369 1114752 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem, removing ...
	I1026 15:13:37.400382 1114752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem
	I1026 15:13:37.400421 1114752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem (1123 bytes)
	I1026 15:13:37.400512 1114752 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem org=jenkins.newest-cni-450976 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-450976]
	I1026 15:13:37.763701 1114752 provision.go:177] copyRemoteCerts
	I1026 15:13:37.763767 1114752 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:13:37.763819 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:37.783049 1114752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33867 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:37.887525 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:13:37.906903 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 15:13:37.926587 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 15:13:37.944386 1114752 provision.go:87] duration metric: took 563.640766ms to configureAuth
	I1026 15:13:37.944414 1114752 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:13:37.944614 1114752 config.go:182] Loaded profile config "newest-cni-450976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:37.944731 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:37.964140 1114752 main.go:141] libmachine: Using SSH client type: native
	I1026 15:13:37.964409 1114752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33867 <nil> <nil>}
	I1026 15:13:37.964428 1114752 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:13:38.255873 1114752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:13:38.255901 1114752 machine.go:96] duration metric: took 4.392559982s to provisionDockerMachine
	I1026 15:13:38.255917 1114752 start.go:293] postStartSetup for "newest-cni-450976" (driver="docker")
	I1026 15:13:38.255931 1114752 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:13:38.256000 1114752 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:13:38.256055 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:38.275739 1114752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33867 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:37.766706 1113766 cli_runner.go:164] Run: docker network inspect embed-certs-535130 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:13:37.785593 1113766 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 15:13:37.789819 1113766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:13:37.800845 1113766 kubeadm.go:883] updating cluster {Name:embed-certs-535130 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:13:37.801020 1113766 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:13:37.801095 1113766 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:13:37.834876 1113766 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:13:37.834902 1113766 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:13:37.834962 1113766 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:13:37.861255 1113766 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:13:37.861279 1113766 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:13:37.861322 1113766 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1026 15:13:37.861435 1113766 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-535130 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:13:37.861503 1113766 ssh_runner.go:195] Run: crio config
	I1026 15:13:37.912692 1113766 cni.go:84] Creating CNI manager for ""
	I1026 15:13:37.912714 1113766 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:13:37.912747 1113766 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:13:37.912784 1113766 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-535130 NodeName:embed-certs-535130 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:13:37.912927 1113766 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-535130"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 15:13:37.913000 1113766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:13:37.921351 1113766 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:13:37.921430 1113766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:13:37.929571 1113766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1026 15:13:37.942588 1113766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:13:37.955865 1113766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
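The 2214-byte payload just copied to /var/tmp/minikube/kubeadm.yaml.new is the kubeadm config printed above. On a fresh cluster that file would be handed to kubeadm; a manual equivalent is roughly:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml

(illustrative only: in this run the existing cluster is restarted rather than re-initialized, as the later "found existing configuration files, will attempt cluster restart" line shows).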
	I1026 15:13:37.970133 1113766 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:13:37.974032 1113766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:13:37.985196 1113766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:38.073069 1113766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:38.095947 1113766 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130 for IP: 192.168.76.2
	I1026 15:13:38.095969 1113766 certs.go:195] generating shared ca certs ...
	I1026 15:13:38.095990 1113766 certs.go:227] acquiring lock for ca certs: {Name:mkc310765b5f037cf348f6c57ba521193a825757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:38.096157 1113766 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key
	I1026 15:13:38.096247 1113766 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key
	I1026 15:13:38.096263 1113766 certs.go:257] generating profile certs ...
	I1026 15:13:38.096402 1113766 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/client.key
	I1026 15:13:38.096505 1113766 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key.abe399f3
	I1026 15:13:38.096557 1113766 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.key
	I1026 15:13:38.096790 1113766 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem (1338 bytes)
	W1026 15:13:38.096865 1113766 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095_empty.pem, impossibly tiny 0 bytes
	I1026 15:13:38.096882 1113766 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:13:38.096913 1113766 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:13:38.096948 1113766 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:13:38.096970 1113766 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem (1675 bytes)
	I1026 15:13:38.097027 1113766 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:13:38.097985 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:13:38.117316 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:13:38.141746 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:13:38.162963 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:13:38.188391 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1026 15:13:38.209813 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 15:13:38.228846 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:13:38.247538 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/embed-certs-535130/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:13:38.267934 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem --> /usr/share/ca-certificates/845095.pem (1338 bytes)
	I1026 15:13:38.287253 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /usr/share/ca-certificates/8450952.pem (1708 bytes)
	I1026 15:13:38.306737 1113766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:13:38.325248 1113766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:13:38.338026 1113766 ssh_runner.go:195] Run: openssl version
	I1026 15:13:38.344312 1113766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845095.pem && ln -fs /usr/share/ca-certificates/845095.pem /etc/ssl/certs/845095.pem"
	I1026 15:13:38.353974 1113766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845095.pem
	I1026 15:13:38.358501 1113766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:26 /usr/share/ca-certificates/845095.pem
	I1026 15:13:38.358573 1113766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845095.pem
	I1026 15:13:38.395847 1113766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/845095.pem /etc/ssl/certs/51391683.0"
	I1026 15:13:38.404522 1113766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8450952.pem && ln -fs /usr/share/ca-certificates/8450952.pem /etc/ssl/certs/8450952.pem"
	I1026 15:13:38.414054 1113766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8450952.pem
	I1026 15:13:38.418460 1113766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:26 /usr/share/ca-certificates/8450952.pem
	I1026 15:13:38.418516 1113766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8450952.pem
	I1026 15:13:38.454059 1113766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8450952.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:13:38.462770 1113766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:13:38.471399 1113766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:38.475250 1113766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:14 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:38.475300 1113766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:38.510924 1113766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
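The hash-then-symlink pairs above follow OpenSSL's CA directory convention: each trusted certificate must be reachable under a name of the form <subject-hash>.0, where the hash is what openssl x509 -hash prints. Reproducing the last pair by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, hence the symlink /etc/ssl/certs/b5213941.0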
	I1026 15:13:38.519486 1113766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:13:38.523384 1113766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 15:13:38.561625 1113766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 15:13:38.601091 1113766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 15:13:38.651816 1113766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 15:13:38.697241 1113766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 15:13:38.754098 1113766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
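The -checkend 86400 runs above ask openssl whether each control-plane certificate stays valid for at least another 86400 seconds (24 hours); openssl exits non-zero if the certificate would expire within that window. A standalone check looks like:

    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
      && echo still valid for 24h

which matches the earlier "skipping valid signed profile cert regeneration" lines: certificates are only regenerated when a check like this fails.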
	I1026 15:13:38.813908 1113766 kubeadm.go:400] StartCluster: {Name:embed-certs-535130 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-535130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:13:38.814039 1113766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:13:38.814105 1113766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:13:38.850213 1113766 cri.go:89] found id: "79f294b1af5377dbbe09bff36c0ce752c337fff26f468f52ba372eeae7c2fbd7"
	I1026 15:13:38.850237 1113766 cri.go:89] found id: "0cf664b8ea8fd4397a4e4d0903d086cb617b472ad1631050bc542a9e5c06ca09"
	I1026 15:13:38.850243 1113766 cri.go:89] found id: "43565d9e1913984f12b45a1203fca769c7b760ccf18830408972ff108c39b9bf"
	I1026 15:13:38.850248 1113766 cri.go:89] found id: "7f30d07b339ab7331f72cd45f5f34ee9c7eb82bec1197a77db9c34d2fcb6c24b"
	I1026 15:13:38.850252 1113766 cri.go:89] found id: ""
	I1026 15:13:38.850291 1113766 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 15:13:38.865515 1113766 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:13:38Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:13:38.865607 1113766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:13:38.878426 1113766 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 15:13:38.878485 1113766 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 15:13:38.878632 1113766 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 15:13:38.890199 1113766 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 15:13:38.891157 1113766 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-535130" does not appear in /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:13:38.891705 1113766 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-841519/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-535130" cluster setting kubeconfig missing "embed-certs-535130" context setting]
	I1026 15:13:38.892512 1113766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:38.894605 1113766 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 15:13:38.904931 1113766 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1026 15:13:38.904968 1113766 kubeadm.go:601] duration metric: took 26.475851ms to restartPrimaryControlPlane
	I1026 15:13:38.904979 1113766 kubeadm.go:402] duration metric: took 91.083527ms to StartCluster
	I1026 15:13:38.904999 1113766 settings.go:142] acquiring lock: {Name:mkab79daecf1fab35293493e1e2484069a81f3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:38.905074 1113766 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:13:38.907087 1113766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:38.907395 1113766 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:13:38.907661 1113766 config.go:182] Loaded profile config "embed-certs-535130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:38.907720 1113766 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:13:38.907797 1113766 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-535130"
	I1026 15:13:38.907828 1113766 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-535130"
	W1026 15:13:38.907836 1113766 addons.go:247] addon storage-provisioner should already be in state true
	I1026 15:13:38.907864 1113766 host.go:66] Checking if "embed-certs-535130" exists ...
	I1026 15:13:38.908130 1113766 addons.go:69] Setting dashboard=true in profile "embed-certs-535130"
	I1026 15:13:38.908184 1113766 addons.go:238] Setting addon dashboard=true in "embed-certs-535130"
	W1026 15:13:38.908193 1113766 addons.go:247] addon dashboard should already be in state true
	I1026 15:13:38.908220 1113766 host.go:66] Checking if "embed-certs-535130" exists ...
	I1026 15:13:38.908373 1113766 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:13:38.908397 1113766 addons.go:69] Setting default-storageclass=true in profile "embed-certs-535130"
	I1026 15:13:38.908423 1113766 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-535130"
	I1026 15:13:38.908708 1113766 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:13:38.908740 1113766 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:13:38.910199 1113766 out.go:179] * Verifying Kubernetes components...
	I1026 15:13:38.912350 1113766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:38.938355 1113766 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 15:13:38.940173 1113766 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:13:38.940965 1113766 addons.go:238] Setting addon default-storageclass=true in "embed-certs-535130"
	W1026 15:13:38.940987 1113766 addons.go:247] addon default-storageclass should already be in state true
	I1026 15:13:38.941016 1113766 host.go:66] Checking if "embed-certs-535130" exists ...
	I1026 15:13:38.941438 1113766 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 15:13:38.376923 1114752 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:13:38.380788 1114752 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:13:38.380831 1114752 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:13:38.380846 1114752 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/addons for local assets ...
	I1026 15:13:38.380907 1114752 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/files for local assets ...
	I1026 15:13:38.381022 1114752 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem -> 8450952.pem in /etc/ssl/certs
	I1026 15:13:38.381143 1114752 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:13:38.389543 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:13:38.409203 1114752 start.go:296] duration metric: took 153.266796ms for postStartSetup
	I1026 15:13:38.409313 1114752 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:13:38.409379 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:38.429864 1114752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33867 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:38.528464 1114752 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:13:38.533391 1114752 fix.go:56] duration metric: took 4.998404293s for fixHost
	I1026 15:13:38.533463 1114752 start.go:83] releasing machines lock for "newest-cni-450976", held for 4.998465392s
	I1026 15:13:38.533543 1114752 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-450976
	I1026 15:13:38.553571 1114752 ssh_runner.go:195] Run: cat /version.json
	I1026 15:13:38.553643 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:38.553654 1114752 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:13:38.553767 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:38.574284 1114752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33867 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:38.574504 1114752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33867 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:38.676051 1114752 ssh_runner.go:195] Run: systemctl --version
	I1026 15:13:38.758754 1114752 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:13:38.813141 1114752 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:13:38.819297 1114752 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:13:38.819357 1114752 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:13:38.830001 1114752 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
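
	The find argv above is logged with its shell escaping stripped; restored, the bridge-CNI disable step is roughly the following (GNU find operators as logged; the {} inside sh -c is minikube's own idiom):

	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;

	Here it matched nothing, hence the "nothing to disable" result.
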
	I1026 15:13:38.830033 1114752 start.go:495] detecting cgroup driver to use...
	I1026 15:13:38.830069 1114752 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 15:13:38.830116 1114752 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:13:38.850256 1114752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:13:38.868194 1114752 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:13:38.868253 1114752 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:13:38.891101 1114752 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:13:38.910824 1114752 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:13:39.062926 1114752 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:13:39.179139 1114752 docker.go:234] disabling docker service ...
	I1026 15:13:39.179229 1114752 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:13:39.201712 1114752 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:13:39.223985 1114752 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:13:39.330872 1114752 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:13:39.440045 1114752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:13:39.456995 1114752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:13:39.475173 1114752 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:13:39.475235 1114752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:39.485838 1114752 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 15:13:39.485911 1114752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:39.497890 1114752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:39.509311 1114752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:39.521401 1114752 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:13:39.531708 1114752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:39.545553 1114752 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:13:39.558867 1114752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
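
	Reconstructed from the sed commands above (surrounding TOML sections and unrelated keys omitted), /etc/crio/crio.conf.d/02-crio.conf should now carry:

	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]

	The sysctl lets unprivileged pods bind ports below 1024; the delete-then-insert sed pair makes the edit idempotent across restarts.
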
	I1026 15:13:39.572132 1114752 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:13:39.582550 1114752 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:13:39.592870 1114752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:39.728450 1114752 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:13:39.862260 1114752 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:13:39.862332 1114752 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:13:39.867333 1114752 start.go:563] Will wait 60s for crictl version
	I1026 15:13:39.867406 1114752 ssh_runner.go:195] Run: which crictl
	I1026 15:13:39.872243 1114752 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:13:39.903804 1114752 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:13:39.903885 1114752 ssh_runner.go:195] Run: crio --version
	I1026 15:13:39.940255 1114752 ssh_runner.go:195] Run: crio --version
	I1026 15:13:39.980141 1114752 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:13:39.981402 1114752 cli_runner.go:164] Run: docker network inspect newest-cni-450976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:13:40.009905 1114752 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1026 15:13:40.014650 1114752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
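
	The brace-group one-liner above (repeated later for control-plane.minikube.internal) filters any stale entry out of /etc/hosts, appends a fresh mapping, stages the result under a unique name, and copies it back with cp rather than mv, presumably because /etc/hosts is bind-mounted inside the kic container and cannot be renamed over. Annotated, with the values from this run (the whitespace in the appended line is a literal tab, as in the log):

	    { grep -v $'\thost.minikube.internal$' /etc/hosts     # drop any stale entry
	      printf '192.168.103.1\thost.minikube.internal\n'    # append the fresh mapping
	    } > /tmp/h.$$                                         # stage under a PID-unique name
	    sudo cp /tmp/h.$$ /etc/hosts                          # copy in place; mv would fail on a bind mount
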
	I1026 15:13:40.027719 1114752 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1026 15:13:38.941506 1113766 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:38.941523 1113766 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:13:38.941613 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:38.941532 1113766 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:13:38.942632 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 15:13:38.942653 1113766 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 15:13:38.942702 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:38.976439 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:38.978129 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:38.980802 1113766 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:38.980863 1113766 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:13:38.981009 1113766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:13:39.014150 1113766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:13:39.104677 1113766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:39.122534 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:13:39.122561 1113766 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:13:39.123493 1113766 node_ready.go:35] waiting up to 6m0s for node "embed-certs-535130" to be "Ready" ...
	I1026 15:13:39.128461 1113766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:39.137116 1113766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:39.143559 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:13:39.143586 1113766 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:13:39.164258 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:13:39.164287 1113766 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:13:39.185403 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:13:39.185487 1113766 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:13:39.204860 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:13:39.204885 1113766 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:13:39.231456 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:13:39.231485 1113766 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:13:39.247596 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:13:39.247622 1113766 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:13:39.268795 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:13:39.268827 1113766 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:13:39.285975 1113766 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:13:39.286003 1113766 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:13:39.300204 1113766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:13:40.480787 1113766 node_ready.go:49] node "embed-certs-535130" is "Ready"
	I1026 15:13:40.480821 1113766 node_ready.go:38] duration metric: took 1.357286103s for node "embed-certs-535130" to be "Ready" ...
	I1026 15:13:40.480838 1113766 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:13:40.480891 1113766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:13:41.063718 1113766 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.926561303s)
	I1026 15:13:41.064077 1113766 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.763825551s)
	I1026 15:13:41.064352 1113766 api_server.go:72] duration metric: took 2.156917709s to wait for apiserver process to appear ...
	I1026 15:13:41.064364 1113766 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:13:41.064384 1113766 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:13:41.066580 1113766 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.938084789s)
	I1026 15:13:41.068720 1113766 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-535130 addons enable metrics-server
	
	I1026 15:13:41.072450 1113766 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:13:41.072475 1113766 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
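
	Both dumps above are the same 500 response logged twice (once at api_server.go:279, once as the api_server.go:103 warning): right after a control-plane restart the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks are typically the last to finish, so this usually clears within seconds. To watch it settle by hand, one could poll the same endpoint (assuming a kubeconfig context for this profile):

	    kubectl --context embed-certs-535130 get --raw '/healthz?verbose'
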
	I1026 15:13:41.079563 1113766 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1026 15:13:41.081450 1113766 addons.go:514] duration metric: took 2.1737229s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1026 15:13:40.028927 1114752 kubeadm.go:883] updating cluster {Name:newest-cni-450976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-450976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:13:40.029111 1114752 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:13:40.029202 1114752 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:13:40.066752 1114752 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:13:40.066779 1114752 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:13:40.066837 1114752 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:13:40.095689 1114752 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:13:40.095711 1114752 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:13:40.095719 1114752 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1026 15:13:40.095834 1114752 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-450976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-450976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:13:40.095896 1114752 ssh_runner.go:195] Run: crio config
	I1026 15:13:40.174353 1114752 cni.go:84] Creating CNI manager for ""
	I1026 15:13:40.174388 1114752 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 15:13:40.174417 1114752 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1026 15:13:40.174447 1114752 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-450976 NodeName:newest-cni-450976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:13:40.174628 1114752 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-450976"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 15:13:40.174714 1114752 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:13:40.185063 1114752 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:13:40.185142 1114752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:13:40.193834 1114752 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1026 15:13:40.207803 1114752 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:13:40.221135 1114752 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
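
	The 2214-byte file staged above is exactly the four-document kubeadm config printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a hypothetical spot check, kubeadm (v1.26+) can validate such a file offline using the binaries minikube stages on the node:

	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new
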
	I1026 15:13:40.235918 1114752 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:13:40.239959 1114752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:13:40.256497 1114752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:40.359653 1114752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:40.395140 1114752 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976 for IP: 192.168.103.2
	I1026 15:13:40.395205 1114752 certs.go:195] generating shared ca certs ...
	I1026 15:13:40.395229 1114752 certs.go:227] acquiring lock for ca certs: {Name:mkc310765b5f037cf348f6c57ba521193a825757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:40.395390 1114752 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key
	I1026 15:13:40.395438 1114752 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key
	I1026 15:13:40.395452 1114752 certs.go:257] generating profile certs ...
	I1026 15:13:40.395587 1114752 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/client.key
	I1026 15:13:40.395677 1114752 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/apiserver.key.6904aab9
	I1026 15:13:40.395726 1114752 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/proxy-client.key
	I1026 15:13:40.395894 1114752 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem (1338 bytes)
	W1026 15:13:40.395936 1114752 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095_empty.pem, impossibly tiny 0 bytes
	I1026 15:13:40.395950 1114752 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:13:40.395985 1114752 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:13:40.396018 1114752 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:13:40.396050 1114752 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem (1675 bytes)
	I1026 15:13:40.396105 1114752 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:13:40.396848 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:13:40.428740 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:13:40.467100 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:13:40.505682 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:13:40.537741 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 15:13:40.570121 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 15:13:40.595584 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:13:40.623177 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/newest-cni-450976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:13:40.644134 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem --> /usr/share/ca-certificates/845095.pem (1338 bytes)
	I1026 15:13:40.667283 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /usr/share/ca-certificates/8450952.pem (1708 bytes)
	I1026 15:13:40.688417 1114752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:13:40.708044 1114752 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:13:40.721964 1114752 ssh_runner.go:195] Run: openssl version
	I1026 15:13:40.729493 1114752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:13:40.740489 1114752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:40.745099 1114752 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:14 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:40.745235 1114752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:40.783506 1114752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:13:40.793310 1114752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845095.pem && ln -fs /usr/share/ca-certificates/845095.pem /etc/ssl/certs/845095.pem"
	I1026 15:13:40.803928 1114752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845095.pem
	I1026 15:13:40.808231 1114752 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:26 /usr/share/ca-certificates/845095.pem
	I1026 15:13:40.808294 1114752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845095.pem
	I1026 15:13:40.855542 1114752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/845095.pem /etc/ssl/certs/51391683.0"
	I1026 15:13:40.865852 1114752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8450952.pem && ln -fs /usr/share/ca-certificates/8450952.pem /etc/ssl/certs/8450952.pem"
	I1026 15:13:40.876943 1114752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8450952.pem
	I1026 15:13:40.881960 1114752 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:26 /usr/share/ca-certificates/8450952.pem
	I1026 15:13:40.882035 1114752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8450952.pem
	I1026 15:13:40.930751 1114752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8450952.pem /etc/ssl/certs/3ec20f2e.0"
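
	The ls / openssl x509 -hash / ln -fs triples above implement the standard OpenSSL c_rehash layout: each CA in /etc/ssl/certs is reachable through a symlink named <subject-hash>.0. One round, matching the b5213941.0 symlink created above:

	    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"   # -> b5213941.0
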
	I1026 15:13:40.941036 1114752 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:13:40.946656 1114752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 15:13:40.990223 1114752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 15:13:41.044589 1114752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 15:13:41.095556 1114752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 15:13:41.150431 1114752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 15:13:41.203945 1114752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
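
	Each -checkend run above is a cheap expiry gate: openssl exits non-zero if the certificate lapses within the given window (86400 s = 24 h), which is evidently how minikube decides whether the control-plane certs need regeneration. Standalone:

	    # exit status 0 only while the cert stays valid for at least another day
	    sudo openssl x509 -noout -checkend 86400 \
	      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
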
	I1026 15:13:41.260135 1114752 kubeadm.go:400] StartCluster: {Name:newest-cni-450976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-450976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:13:41.260282 1114752 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:13:41.260381 1114752 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:13:41.312408 1114752 cri.go:89] found id: "d301b19a9754fef9062ff0ab32cef39843a3b341f9c9c9c979ce50772e060f34"
	I1026 15:13:41.312495 1114752 cri.go:89] found id: "eca31c4960e5fee40ff7a27e80d78ba23e050229040a9c119c1a39d6d964c134"
	I1026 15:13:41.312506 1114752 cri.go:89] found id: "7b4821416cdb1f5a1c75031b5a1a9853efa078e8f2964c61061e443a8fe518d0"
	I1026 15:13:41.312512 1114752 cri.go:89] found id: "dad7b5a044afb9affbe248c4fce4bf89b73634fb0298fd50fe83199eecb4779f"
	I1026 15:13:41.312516 1114752 cri.go:89] found id: ""
	I1026 15:13:41.312586 1114752 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 15:13:41.328414 1114752 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:13:41Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:13:41.328490 1114752 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:13:41.339143 1114752 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 15:13:41.339202 1114752 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 15:13:41.339274 1114752 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 15:13:41.349811 1114752 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 15:13:41.351328 1114752 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-450976" does not appear in /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:13:41.352331 1114752 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-841519/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-450976" cluster setting kubeconfig missing "newest-cni-450976" context setting]
	I1026 15:13:41.353686 1114752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:41.356482 1114752 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 15:13:41.368106 1114752 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1026 15:13:41.368228 1114752 kubeadm.go:601] duration metric: took 29.01603ms to restartPrimaryControlPlane
	I1026 15:13:41.368248 1114752 kubeadm.go:402] duration metric: took 108.140463ms to StartCluster
	I1026 15:13:41.368309 1114752 settings.go:142] acquiring lock: {Name:mkab79daecf1fab35293493e1e2484069a81f3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:41.368403 1114752 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:13:41.371525 1114752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:41.371844 1114752 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:13:41.371893 1114752 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:13:41.371998 1114752 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-450976"
	I1026 15:13:41.372027 1114752 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-450976"
	W1026 15:13:41.372049 1114752 addons.go:247] addon storage-provisioner should already be in state true
	I1026 15:13:41.372062 1114752 addons.go:69] Setting dashboard=true in profile "newest-cni-450976"
	I1026 15:13:41.372077 1114752 addons.go:238] Setting addon dashboard=true in "newest-cni-450976"
	I1026 15:13:41.372081 1114752 host.go:66] Checking if "newest-cni-450976" exists ...
	W1026 15:13:41.372084 1114752 addons.go:247] addon dashboard should already be in state true
	I1026 15:13:41.372094 1114752 config.go:182] Loaded profile config "newest-cni-450976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:13:41.372107 1114752 host.go:66] Checking if "newest-cni-450976" exists ...
	I1026 15:13:41.372146 1114752 addons.go:69] Setting default-storageclass=true in profile "newest-cni-450976"
	I1026 15:13:41.372184 1114752 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-450976"
	I1026 15:13:41.372469 1114752 cli_runner.go:164] Run: docker container inspect newest-cni-450976 --format={{.State.Status}}
	I1026 15:13:41.372627 1114752 cli_runner.go:164] Run: docker container inspect newest-cni-450976 --format={{.State.Status}}
	I1026 15:13:41.372627 1114752 cli_runner.go:164] Run: docker container inspect newest-cni-450976 --format={{.State.Status}}
	I1026 15:13:41.375710 1114752 out.go:179] * Verifying Kubernetes components...
	I1026 15:13:41.377073 1114752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:41.403083 1114752 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 15:13:41.403092 1114752 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:13:41.404381 1114752 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:41.404403 1114752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:13:41.404443 1114752 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1026 15:13:37.050036 1107827 node_ready.go:57] node "auto-498531" has "Ready":"False" status (will retry)
	W1026 15:13:39.051344 1107827 node_ready.go:57] node "auto-498531" has "Ready":"False" status (will retry)
	W1026 15:13:41.550747 1107827 node_ready.go:57] node "auto-498531" has "Ready":"False" status (will retry)
	I1026 15:13:41.404459 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:41.405303 1114752 addons.go:238] Setting addon default-storageclass=true in "newest-cni-450976"
	W1026 15:13:41.405323 1114752 addons.go:247] addon default-storageclass should already be in state true
	I1026 15:13:41.405352 1114752 host.go:66] Checking if "newest-cni-450976" exists ...
	I1026 15:13:41.405598 1114752 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 15:13:41.405622 1114752 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 15:13:41.405701 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:41.405848 1114752 cli_runner.go:164] Run: docker container inspect newest-cni-450976 --format={{.State.Status}}
	I1026 15:13:41.434754 1114752 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:41.434778 1114752 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:13:41.435001 1114752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-450976
	I1026 15:13:41.440367 1114752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33867 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:41.440381 1114752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33867 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:41.468417 1114752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33867 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/newest-cni-450976/id_rsa Username:docker}
	I1026 15:13:41.570403 1114752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:41.626524 1114752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:13:41.637974 1114752 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:13:41.638018 1114752 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:13:41.638354 1114752 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:13:41.638413 1114752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:13:41.649500 1114752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:13:41.701715 1114752 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:13:41.701748 1114752 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:13:41.758574 1114752 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:13:41.758600 1114752 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:13:41.789831 1114752 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:13:41.789857 1114752 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:13:41.820110 1114752 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:13:41.820295 1114752 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:13:41.845585 1114752 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:13:41.846232 1114752 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:13:41.871736 1114752 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:13:41.871894 1114752 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:13:41.895283 1114752 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:13:41.895922 1114752 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:13:41.920762 1114752 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:13:41.920789 1114752 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:13:41.947088 1114752 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:13:43.537431 1114752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.910821538s)
	I1026 15:13:43.537498 1114752 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.899063191s)
	I1026 15:13:43.537512 1114752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.887990023s)
	I1026 15:13:43.537531 1114752 api_server.go:72] duration metric: took 2.165650284s to wait for apiserver process to appear ...
	I1026 15:13:43.537541 1114752 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:13:43.537564 1114752 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1026 15:13:43.537678 1114752 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.590549527s)
	I1026 15:13:43.539310 1114752 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-450976 addons enable metrics-server
	
	I1026 15:13:43.546753 1114752 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:13:43.546780 1114752 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:13:43.553499 1114752 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1026 15:13:43.554571 1114752 addons.go:514] duration metric: took 2.18268422s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1026 15:13:44.038396 1114752 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1026 15:13:44.045081 1114752 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:13:44.045119 1114752 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:13:44.537650 1114752 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1026 15:13:44.541643 1114752 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1026 15:13:44.542688 1114752 api_server.go:141] control plane version: v1.34.1
	I1026 15:13:44.542717 1114752 api_server.go:131] duration metric: took 1.005167152s to wait for apiserver health ...
	I1026 15:13:44.542729 1114752 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:13:44.546030 1114752 system_pods.go:59] 8 kube-system pods found
	I1026 15:13:44.546057 1114752 system_pods.go:61] "coredns-66bc5c9577-7jwrr" [c1acc555-e2da-4acf-ac6d-6818ea2173d5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 15:13:44.546064 1114752 system_pods.go:61] "etcd-newest-cni-450976" [5ee64166-247f-49ca-9212-b4c60c0152c1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:13:44.546072 1114752 system_pods.go:61] "kindnet-9tqxv" [d6ade61f-e6fb-4746-9b65-ce10129cd53e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1026 15:13:44.546077 1114752 system_pods.go:61] "kube-apiserver-newest-cni-450976" [a2aa9446-3bbe-45c4-902b-07e7773290bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:13:44.546083 1114752 system_pods.go:61] "kube-controller-manager-newest-cni-450976" [0ae3b699-6a5a-41d6-b223-9f6858f990cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:13:44.546096 1114752 system_pods.go:61] "kube-proxy-jfm7b" [6e6c6e48-eb1f-4a31-9cf4-390096851e53] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:13:44.546102 1114752 system_pods.go:61] "kube-scheduler-newest-cni-450976" [8a2965f8-8545-46fd-bcf3-cc767c87b873] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:13:44.546107 1114752 system_pods.go:61] "storage-provisioner" [7182c30a-3cfc-49ba-b2d8-ee172f0272dd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 15:13:44.546115 1114752 system_pods.go:74] duration metric: took 3.379927ms to wait for pod list to return data ...
	I1026 15:13:44.546126 1114752 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:13:44.548446 1114752 default_sa.go:45] found service account: "default"
	I1026 15:13:44.548466 1114752 default_sa.go:55] duration metric: took 2.333903ms for default service account to be created ...
	I1026 15:13:44.548476 1114752 kubeadm.go:586] duration metric: took 3.176596107s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 15:13:44.548495 1114752 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:13:44.550942 1114752 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 15:13:44.550972 1114752 node_conditions.go:123] node cpu capacity is 8
	I1026 15:13:44.550988 1114752 node_conditions.go:105] duration metric: took 2.487701ms to run NodePressure ...
	I1026 15:13:44.551004 1114752 start.go:241] waiting for startup goroutines ...
	I1026 15:13:44.551016 1114752 start.go:246] waiting for cluster config update ...
	I1026 15:13:44.551030 1114752 start.go:255] writing updated cluster config ...
	I1026 15:13:44.551393 1114752 ssh_runner.go:195] Run: rm -f paused
	I1026 15:13:44.601145 1114752 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:13:44.603093 1114752 out.go:179] * Done! kubectl is now configured to use "newest-cni-450976" cluster and "default" namespace by default
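
The 500-then-200 exchange above is minikube polling the apiserver's /healthz endpoint until the last post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) report ok. A minimal standalone sketch of that poll loop follows; the timeout and interval are assumptions, and TLS verification is disabled purely for illustration (minikube itself authenticates with the cluster's client certificates), so treat this as a reading aid rather than minikube's actual implementation.

// healthz_poll.go: a sketch of the /healthz poll loop seen above (assumptions noted inline).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: a real client should trust the cluster CA
			// and present the cluster's client certificate instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	const url = "https://192.168.103.2:8443/healthz" // endpoint taken from the log above
	deadline := time.Now().Add(2 * time.Minute)      // overall timeout is an assumption
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz unreachable, retrying:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok: %s\n", body)
				return
			}
			// A 500 body lists every post-start hook; [-] entries are still pending.
			fmt.Printf("healthz %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // the log above re-checks roughly every 500ms
	}
	fmt.Println("timed out waiting for apiserver health")
}
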
	I1026 15:13:41.565077 1113766 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:13:41.583478 1113766 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:13:41.583523 1113766 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:13:42.065228 1113766 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 15:13:42.070792 1113766 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 15:13:42.072076 1113766 api_server.go:141] control plane version: v1.34.1
	I1026 15:13:42.072113 1113766 api_server.go:131] duration metric: took 1.007740479s to wait for apiserver health ...
	I1026 15:13:42.072124 1113766 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:13:42.076256 1113766 system_pods.go:59] 8 kube-system pods found
	I1026 15:13:42.076293 1113766 system_pods.go:61] "coredns-66bc5c9577-pnbct" [5ed72083-0ec8-4686-be6f-962755eee655] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:13:42.076305 1113766 system_pods.go:61] "etcd-embed-certs-535130" [5a890218-8e8c-4072-a89d-dec140b353f8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:13:42.076313 1113766 system_pods.go:61] "kindnet-mlqjm" [526c1bc2-396a-4668-8248-d95483175948] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1026 15:13:42.076325 1113766 system_pods.go:61] "kube-apiserver-embed-certs-535130" [5e297bec-df61-4675-b6d7-1d5a67e0f3e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:13:42.076341 1113766 system_pods.go:61] "kube-controller-manager-embed-certs-535130" [de44f030-b276-41e4-9194-8ff5827569ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:13:42.076349 1113766 system_pods.go:61] "kube-proxy-nbr2d" [6afa7745-4329-4477-9744-1aa5b789adc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:13:42.076356 1113766 system_pods.go:61] "kube-scheduler-embed-certs-535130" [39891617-036e-4f05-a816-1b7418d2b3f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:13:42.076363 1113766 system_pods.go:61] "storage-provisioner" [ecac2fee-1c15-4fee-9ccd-cf42d0a041c3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:13:42.076372 1113766 system_pods.go:74] duration metric: took 4.239846ms to wait for pod list to return data ...
	I1026 15:13:42.076383 1113766 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:13:42.079484 1113766 default_sa.go:45] found service account: "default"
	I1026 15:13:42.079503 1113766 default_sa.go:55] duration metric: took 3.114492ms for default service account to be created ...
	I1026 15:13:42.079512 1113766 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:13:42.082915 1113766 system_pods.go:86] 8 kube-system pods found
	I1026 15:13:42.082945 1113766 system_pods.go:89] "coredns-66bc5c9577-pnbct" [5ed72083-0ec8-4686-be6f-962755eee655] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:13:42.082955 1113766 system_pods.go:89] "etcd-embed-certs-535130" [5a890218-8e8c-4072-a89d-dec140b353f8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:13:42.082967 1113766 system_pods.go:89] "kindnet-mlqjm" [526c1bc2-396a-4668-8248-d95483175948] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1026 15:13:42.082975 1113766 system_pods.go:89] "kube-apiserver-embed-certs-535130" [5e297bec-df61-4675-b6d7-1d5a67e0f3e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:13:42.082983 1113766 system_pods.go:89] "kube-controller-manager-embed-certs-535130" [de44f030-b276-41e4-9194-8ff5827569ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:13:42.082991 1113766 system_pods.go:89] "kube-proxy-nbr2d" [6afa7745-4329-4477-9744-1aa5b789adc6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:13:42.082998 1113766 system_pods.go:89] "kube-scheduler-embed-certs-535130" [39891617-036e-4f05-a816-1b7418d2b3f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:13:42.083006 1113766 system_pods.go:89] "storage-provisioner" [ecac2fee-1c15-4fee-9ccd-cf42d0a041c3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:13:42.083016 1113766 system_pods.go:126] duration metric: took 3.497362ms to wait for k8s-apps to be running ...
	I1026 15:13:42.083025 1113766 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:13:42.083073 1113766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:13:42.099508 1113766 system_svc.go:56] duration metric: took 16.470343ms WaitForService to wait for kubelet
	I1026 15:13:42.099540 1113766 kubeadm.go:586] duration metric: took 3.192109292s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:13:42.099663 1113766 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:13:42.103491 1113766 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 15:13:42.103592 1113766 node_conditions.go:123] node cpu capacity is 8
	I1026 15:13:42.103633 1113766 node_conditions.go:105] duration metric: took 3.962936ms to run NodePressure ...
	I1026 15:13:42.103651 1113766 start.go:241] waiting for startup goroutines ...
	I1026 15:13:42.103660 1113766 start.go:246] waiting for cluster config update ...
	I1026 15:13:42.103674 1113766 start.go:255] writing updated cluster config ...
	I1026 15:13:42.104029 1113766 ssh_runner.go:195] Run: rm -f paused
	I1026 15:13:42.108545 1113766 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:13:42.117346 1113766 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pnbct" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 15:13:44.124646 1113766 pod_ready.go:104] pod "coredns-66bc5c9577-pnbct" is not "Ready", error: <nil>
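
The pod_ready waits above and below come down to inspecting each pod's PodReady condition. A sketch of that check using client-go; the kubeconfig path is the one these nodes use but is an assumption here, and the label selector mirrors the kube-dns selector from the extra-wait list rather than minikube's exact helper.

// pod_ready.go: a sketch of the "Ready" condition check behind pod_ready.go (paths and selector assumed).
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same label family as the extra-wait above (k8s-app=kube-dns and friends).
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s Ready=%v\n", p.Name, isPodReady(&p))
	}
}
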
	W1026 15:13:44.050572 1107827 node_ready.go:57] node "auto-498531" has "Ready":"False" status (will retry)
	W1026 15:13:46.050839 1107827 node_ready.go:57] node "auto-498531" has "Ready":"False" status (will retry)
	I1026 15:13:46.550212 1107827 node_ready.go:49] node "auto-498531" is "Ready"
	I1026 15:13:46.550250 1107827 node_ready.go:38] duration metric: took 11.50388973s for node "auto-498531" to be "Ready" ...
	I1026 15:13:46.550267 1107827 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:13:46.550338 1107827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:13:46.567335 1107827 api_server.go:72] duration metric: took 11.796271666s to wait for apiserver process to appear ...
	I1026 15:13:46.567364 1107827 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:13:46.567390 1107827 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1026 15:13:46.574323 1107827 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1026 15:13:46.575551 1107827 api_server.go:141] control plane version: v1.34.1
	I1026 15:13:46.575582 1107827 api_server.go:131] duration metric: took 8.210554ms to wait for apiserver health ...
	I1026 15:13:46.575591 1107827 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:13:46.580524 1107827 system_pods.go:59] 8 kube-system pods found
	I1026 15:13:46.580586 1107827 system_pods.go:61] "coredns-66bc5c9577-jcpvm" [3428a486-88ea-48c7-946a-da8234cd2419] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:13:46.580595 1107827 system_pods.go:61] "etcd-auto-498531" [e7da26c1-a68c-4d29-bae4-969cd67b1f70] Running
	I1026 15:13:46.580604 1107827 system_pods.go:61] "kindnet-6xblk" [37bd26e0-8e64-42ce-bd61-ea1b0d2df751] Running
	I1026 15:13:46.580609 1107827 system_pods.go:61] "kube-apiserver-auto-498531" [120a84c2-f356-4bfb-ac38-34928c794ae9] Running
	I1026 15:13:46.580614 1107827 system_pods.go:61] "kube-controller-manager-auto-498531" [1aa72b0f-39eb-458f-a064-7c52e3eddc39] Running
	I1026 15:13:46.580620 1107827 system_pods.go:61] "kube-proxy-2mhq8" [3154e799-1a38-427a-bacd-6d75a98a980e] Running
	I1026 15:13:46.580631 1107827 system_pods.go:61] "kube-scheduler-auto-498531" [bdc1d7a9-cef7-40ff-baaa-8275d69a735c] Running
	I1026 15:13:46.580640 1107827 system_pods.go:61] "storage-provisioner" [f0de56d9-f85f-4881-8a10-3f5c00767bd1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:13:46.580652 1107827 system_pods.go:74] duration metric: took 5.05244ms to wait for pod list to return data ...
	I1026 15:13:46.580668 1107827 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:13:46.583503 1107827 default_sa.go:45] found service account: "default"
	I1026 15:13:46.583534 1107827 default_sa.go:55] duration metric: took 2.853462ms for default service account to be created ...
	I1026 15:13:46.583545 1107827 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:13:46.587809 1107827 system_pods.go:86] 8 kube-system pods found
	I1026 15:13:46.587854 1107827 system_pods.go:89] "coredns-66bc5c9577-jcpvm" [3428a486-88ea-48c7-946a-da8234cd2419] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:13:46.587863 1107827 system_pods.go:89] "etcd-auto-498531" [e7da26c1-a68c-4d29-bae4-969cd67b1f70] Running
	I1026 15:13:46.587873 1107827 system_pods.go:89] "kindnet-6xblk" [37bd26e0-8e64-42ce-bd61-ea1b0d2df751] Running
	I1026 15:13:46.587879 1107827 system_pods.go:89] "kube-apiserver-auto-498531" [120a84c2-f356-4bfb-ac38-34928c794ae9] Running
	I1026 15:13:46.587885 1107827 system_pods.go:89] "kube-controller-manager-auto-498531" [1aa72b0f-39eb-458f-a064-7c52e3eddc39] Running
	I1026 15:13:46.587891 1107827 system_pods.go:89] "kube-proxy-2mhq8" [3154e799-1a38-427a-bacd-6d75a98a980e] Running
	I1026 15:13:46.587897 1107827 system_pods.go:89] "kube-scheduler-auto-498531" [bdc1d7a9-cef7-40ff-baaa-8275d69a735c] Running
	I1026 15:13:46.587904 1107827 system_pods.go:89] "storage-provisioner" [f0de56d9-f85f-4881-8a10-3f5c00767bd1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:13:46.587935 1107827 retry.go:31] will retry after 272.479432ms: missing components: kube-dns
	I1026 15:13:46.865697 1107827 system_pods.go:86] 8 kube-system pods found
	I1026 15:13:46.865736 1107827 system_pods.go:89] "coredns-66bc5c9577-jcpvm" [3428a486-88ea-48c7-946a-da8234cd2419] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:13:46.865745 1107827 system_pods.go:89] "etcd-auto-498531" [e7da26c1-a68c-4d29-bae4-969cd67b1f70] Running
	I1026 15:13:46.865754 1107827 system_pods.go:89] "kindnet-6xblk" [37bd26e0-8e64-42ce-bd61-ea1b0d2df751] Running
	I1026 15:13:46.865759 1107827 system_pods.go:89] "kube-apiserver-auto-498531" [120a84c2-f356-4bfb-ac38-34928c794ae9] Running
	I1026 15:13:46.865767 1107827 system_pods.go:89] "kube-controller-manager-auto-498531" [1aa72b0f-39eb-458f-a064-7c52e3eddc39] Running
	I1026 15:13:46.865773 1107827 system_pods.go:89] "kube-proxy-2mhq8" [3154e799-1a38-427a-bacd-6d75a98a980e] Running
	I1026 15:13:46.865778 1107827 system_pods.go:89] "kube-scheduler-auto-498531" [bdc1d7a9-cef7-40ff-baaa-8275d69a735c] Running
	I1026 15:13:46.865783 1107827 system_pods.go:89] "storage-provisioner" [f0de56d9-f85f-4881-8a10-3f5c00767bd1] Running
	I1026 15:13:46.865802 1107827 retry.go:31] will retry after 252.326723ms: missing components: kube-dns
	I1026 15:13:47.122590 1107827 system_pods.go:86] 8 kube-system pods found
	I1026 15:13:47.122631 1107827 system_pods.go:89] "coredns-66bc5c9577-jcpvm" [3428a486-88ea-48c7-946a-da8234cd2419] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:13:47.122640 1107827 system_pods.go:89] "etcd-auto-498531" [e7da26c1-a68c-4d29-bae4-969cd67b1f70] Running
	I1026 15:13:47.122650 1107827 system_pods.go:89] "kindnet-6xblk" [37bd26e0-8e64-42ce-bd61-ea1b0d2df751] Running
	I1026 15:13:47.122656 1107827 system_pods.go:89] "kube-apiserver-auto-498531" [120a84c2-f356-4bfb-ac38-34928c794ae9] Running
	I1026 15:13:47.122661 1107827 system_pods.go:89] "kube-controller-manager-auto-498531" [1aa72b0f-39eb-458f-a064-7c52e3eddc39] Running
	I1026 15:13:47.122667 1107827 system_pods.go:89] "kube-proxy-2mhq8" [3154e799-1a38-427a-bacd-6d75a98a980e] Running
	I1026 15:13:47.122672 1107827 system_pods.go:89] "kube-scheduler-auto-498531" [bdc1d7a9-cef7-40ff-baaa-8275d69a735c] Running
	I1026 15:13:47.122679 1107827 system_pods.go:89] "storage-provisioner" [f0de56d9-f85f-4881-8a10-3f5c00767bd1] Running
	I1026 15:13:47.122698 1107827 retry.go:31] will retry after 307.927123ms: missing components: kube-dns
	I1026 15:13:47.434438 1107827 system_pods.go:86] 8 kube-system pods found
	I1026 15:13:47.434477 1107827 system_pods.go:89] "coredns-66bc5c9577-jcpvm" [3428a486-88ea-48c7-946a-da8234cd2419] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:13:47.434483 1107827 system_pods.go:89] "etcd-auto-498531" [e7da26c1-a68c-4d29-bae4-969cd67b1f70] Running
	I1026 15:13:47.434490 1107827 system_pods.go:89] "kindnet-6xblk" [37bd26e0-8e64-42ce-bd61-ea1b0d2df751] Running
	I1026 15:13:47.434494 1107827 system_pods.go:89] "kube-apiserver-auto-498531" [120a84c2-f356-4bfb-ac38-34928c794ae9] Running
	I1026 15:13:47.434497 1107827 system_pods.go:89] "kube-controller-manager-auto-498531" [1aa72b0f-39eb-458f-a064-7c52e3eddc39] Running
	I1026 15:13:47.434502 1107827 system_pods.go:89] "kube-proxy-2mhq8" [3154e799-1a38-427a-bacd-6d75a98a980e] Running
	I1026 15:13:47.434506 1107827 system_pods.go:89] "kube-scheduler-auto-498531" [bdc1d7a9-cef7-40ff-baaa-8275d69a735c] Running
	I1026 15:13:47.434509 1107827 system_pods.go:89] "storage-provisioner" [f0de56d9-f85f-4881-8a10-3f5c00767bd1] Running
	I1026 15:13:47.434524 1107827 retry.go:31] will retry after 394.776492ms: missing components: kube-dns
	I1026 15:13:47.834139 1107827 system_pods.go:86] 8 kube-system pods found
	I1026 15:13:47.834181 1107827 system_pods.go:89] "coredns-66bc5c9577-jcpvm" [3428a486-88ea-48c7-946a-da8234cd2419] Running
	I1026 15:13:47.834192 1107827 system_pods.go:89] "etcd-auto-498531" [e7da26c1-a68c-4d29-bae4-969cd67b1f70] Running
	I1026 15:13:47.834199 1107827 system_pods.go:89] "kindnet-6xblk" [37bd26e0-8e64-42ce-bd61-ea1b0d2df751] Running
	I1026 15:13:47.834205 1107827 system_pods.go:89] "kube-apiserver-auto-498531" [120a84c2-f356-4bfb-ac38-34928c794ae9] Running
	I1026 15:13:47.834211 1107827 system_pods.go:89] "kube-controller-manager-auto-498531" [1aa72b0f-39eb-458f-a064-7c52e3eddc39] Running
	I1026 15:13:47.834215 1107827 system_pods.go:89] "kube-proxy-2mhq8" [3154e799-1a38-427a-bacd-6d75a98a980e] Running
	I1026 15:13:47.834220 1107827 system_pods.go:89] "kube-scheduler-auto-498531" [bdc1d7a9-cef7-40ff-baaa-8275d69a735c] Running
	I1026 15:13:47.834225 1107827 system_pods.go:89] "storage-provisioner" [f0de56d9-f85f-4881-8a10-3f5c00767bd1] Running
	I1026 15:13:47.834235 1107827 system_pods.go:126] duration metric: took 1.250675765s to wait for k8s-apps to be running ...
	I1026 15:13:47.834253 1107827 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:13:47.834301 1107827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:13:47.849568 1107827 system_svc.go:56] duration metric: took 15.301856ms WaitForService to wait for kubelet
	I1026 15:13:47.849603 1107827 kubeadm.go:586] duration metric: took 13.078544335s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:13:47.849625 1107827 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:13:47.853193 1107827 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 15:13:47.853222 1107827 node_conditions.go:123] node cpu capacity is 8
	I1026 15:13:47.853239 1107827 node_conditions.go:105] duration metric: took 3.608348ms to run NodePressure ...
	I1026 15:13:47.853251 1107827 start.go:241] waiting for startup goroutines ...
	I1026 15:13:47.853257 1107827 start.go:246] waiting for cluster config update ...
	I1026 15:13:47.853267 1107827 start.go:255] writing updated cluster config ...
	I1026 15:13:47.853559 1107827 ssh_runner.go:195] Run: rm -f paused
	I1026 15:13:47.858465 1107827 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:13:47.863594 1107827 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jcpvm" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:47.869102 1107827 pod_ready.go:94] pod "coredns-66bc5c9577-jcpvm" is "Ready"
	I1026 15:13:47.869132 1107827 pod_ready.go:86] duration metric: took 5.506636ms for pod "coredns-66bc5c9577-jcpvm" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:47.871665 1107827 pod_ready.go:83] waiting for pod "etcd-auto-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:47.876332 1107827 pod_ready.go:94] pod "etcd-auto-498531" is "Ready"
	I1026 15:13:47.876357 1107827 pod_ready.go:86] duration metric: took 4.669466ms for pod "etcd-auto-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:47.878642 1107827 pod_ready.go:83] waiting for pod "kube-apiserver-auto-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:47.883942 1107827 pod_ready.go:94] pod "kube-apiserver-auto-498531" is "Ready"
	I1026 15:13:47.883972 1107827 pod_ready.go:86] duration metric: took 5.309129ms for pod "kube-apiserver-auto-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:47.886472 1107827 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:48.264103 1107827 pod_ready.go:94] pod "kube-controller-manager-auto-498531" is "Ready"
	I1026 15:13:48.264140 1107827 pod_ready.go:86] duration metric: took 377.639417ms for pod "kube-controller-manager-auto-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:48.464032 1107827 pod_ready.go:83] waiting for pod "kube-proxy-2mhq8" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:48.865504 1107827 pod_ready.go:94] pod "kube-proxy-2mhq8" is "Ready"
	I1026 15:13:48.865535 1107827 pod_ready.go:86] duration metric: took 401.468941ms for pod "kube-proxy-2mhq8" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:49.064441 1107827 pod_ready.go:83] waiting for pod "kube-scheduler-auto-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:49.464482 1107827 pod_ready.go:94] pod "kube-scheduler-auto-498531" is "Ready"
	I1026 15:13:49.464517 1107827 pod_ready.go:86] duration metric: took 400.045311ms for pod "kube-scheduler-auto-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:13:49.464532 1107827 pod_ready.go:40] duration metric: took 1.606027952s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:13:49.535983 1107827 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:13:49.538296 1107827 out.go:179] * Done! kubectl is now configured to use "auto-498531" cluster and "default" namespace by default
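
The retry.go lines at 15:13:46-47 above show the shape of minikube's component wait: re-check the pod list after a randomized, slowly growing delay (272ms, 252ms, 307ms, 394ms) until nothing is missing. A self-contained sketch of that pattern; the interval growth and jitter proportions are assumptions, and the check function is a stand-in for the "missing components: kube-dns" test.

// retry_backoff.go: a sketch of retry-with-jittered-backoff as seen in the retry.go lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfterJitter calls check until it returns nil or the timeout elapses,
// sleeping a jittered, slowly growing interval between attempts.
func retryAfterJitter(timeout time.Duration, check func() error) error {
	base := 250 * time.Millisecond // starting interval is an assumption
	deadline := time.Now().Add(timeout)
	for attempt := 1; ; attempt++ {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		// Grow the base slightly each round and add up to 50% random jitter.
		d := base + time.Duration(rand.Int63n(int64(base/2)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
		base += 50 * time.Millisecond
	}
}

func main() {
	start := time.Now()
	err := retryAfterJitter(10*time.Second, func() error {
		// Stand-in for "is kube-dns Running yet?"; succeeds after ~1s here.
		if time.Since(start) < time.Second {
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
	fmt.Println("result:", err)
}
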
	
	
	==> CRI-O <==
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.828146276Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.830979782Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=7bfd36c4-6ce0-47e4-bc4b-e8b5763bbf06 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.831835582Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a6f736c3-45da-4bc2-9608-fc5943c4435a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.832796517Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.833289016Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.833767325Z" level=info msg="Ran pod sandbox 6a13811aad91b61b77733094a452623be770779c53983d308398d24b6ca27333 with infra container: kube-system/kindnet-9tqxv/POD" id=7bfd36c4-6ce0-47e4-bc4b-e8b5763bbf06 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.833974462Z" level=info msg="Ran pod sandbox c5bd8a9276e64d520549c0ec08d3b8735d232153d0020848bc173f4a22e52107 with infra container: kube-system/kube-proxy-jfm7b/POD" id=a6f736c3-45da-4bc2-9608-fc5943c4435a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.835277368Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=13b3a378-6997-4f01-807f-acb2f4105568 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.835330245Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=282ec152-f0dd-422d-84eb-229f59a4fd8a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.836383551Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=23c06f72-980d-4523-96c9-8d7ee5af8027 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.836413557Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=21ff233c-06ba-4248-b457-5d4f2266a703 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.837634887Z" level=info msg="Creating container: kube-system/kube-proxy-jfm7b/kube-proxy" id=03318cee-5c38-4554-bd2b-0d9e0aa6de76 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.837662428Z" level=info msg="Creating container: kube-system/kindnet-9tqxv/kindnet-cni" id=6284997d-5769-4371-a5e0-6b08ff48ab71 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.837757578Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.837767342Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.843450987Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.84450578Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.846388751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.846780337Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.872280019Z" level=info msg="Created container 28e4049021789d7b497ba2bfd04b269e3b3c2807c7507dd9f483593309c84b80: kube-system/kindnet-9tqxv/kindnet-cni" id=6284997d-5769-4371-a5e0-6b08ff48ab71 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.872964192Z" level=info msg="Starting container: 28e4049021789d7b497ba2bfd04b269e3b3c2807c7507dd9f483593309c84b80" id=ebfef72f-d2ae-4526-822b-84855d1ff1ef name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.875126497Z" level=info msg="Started container" PID=1034 containerID=28e4049021789d7b497ba2bfd04b269e3b3c2807c7507dd9f483593309c84b80 description=kube-system/kindnet-9tqxv/kindnet-cni id=ebfef72f-d2ae-4526-822b-84855d1ff1ef name=/runtime.v1.RuntimeService/StartContainer sandboxID=6a13811aad91b61b77733094a452623be770779c53983d308398d24b6ca27333
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.8757217Z" level=info msg="Created container 4067fc481bc4baf4606a9f82937e71103371389384e0fc32bb2fb41a915456e9: kube-system/kube-proxy-jfm7b/kube-proxy" id=03318cee-5c38-4554-bd2b-0d9e0aa6de76 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.876513602Z" level=info msg="Starting container: 4067fc481bc4baf4606a9f82937e71103371389384e0fc32bb2fb41a915456e9" id=3058dddb-bfac-4768-9ff3-c9e117a05c74 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:13:43 newest-cni-450976 crio[519]: time="2025-10-26T15:13:43.87945866Z" level=info msg="Started container" PID=1035 containerID=4067fc481bc4baf4606a9f82937e71103371389384e0fc32bb2fb41a915456e9 description=kube-system/kube-proxy-jfm7b/kube-proxy id=3058dddb-bfac-4768-9ff3-c9e117a05c74 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c5bd8a9276e64d520549c0ec08d3b8735d232153d0020848bc173f4a22e52107
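
Each RunPodSandbox/CreateContainer/StartContainer entry above is a CRI call answered by CRI-O, and the container status table that follows is the view of the results over the same socket. A sketch that gathers a comparable listing by shelling out to crictl; it assumes crictl is on PATH and uses the CRI-O socket path standard on these nodes.

// crictl_ps.go: a sketch that lists CRI-O containers much like the table below (socket path assumed).
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "crictl",
		"--runtime-endpoint", "unix:///var/run/crio/crio.sock",
		"ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Println("crictl failed:", err)
	}
	fmt.Print(string(out))
}
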
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	4067fc481bc4b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   7 seconds ago       Running             kube-proxy                1                   c5bd8a9276e64       kube-proxy-jfm7b                            kube-system
	28e4049021789       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   7 seconds ago       Running             kindnet-cni               1                   6a13811aad91b       kindnet-9tqxv                               kube-system
	d301b19a9754f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   10 seconds ago      Running             kube-controller-manager   1                   e377ee3bf177b       kube-controller-manager-newest-cni-450976   kube-system
	eca31c4960e5f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   10 seconds ago      Running             etcd                      1                   2a22a8c561c92       etcd-newest-cni-450976                      kube-system
	7b4821416cdb1       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   10 seconds ago      Running             kube-scheduler            1                   310c3819897f1       kube-scheduler-newest-cni-450976            kube-system
	dad7b5a044afb       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   10 seconds ago      Running             kube-apiserver            1                   b652e82ca92ce       kube-apiserver-newest-cni-450976            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-450976
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-450976
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=newest-cni-450976
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_13_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:13:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-450976
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:13:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:13:43 +0000   Sun, 26 Oct 2025 15:13:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:13:43 +0000   Sun, 26 Oct 2025 15:13:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:13:43 +0000   Sun, 26 Oct 2025 15:13:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 26 Oct 2025 15:13:43 +0000   Sun, 26 Oct 2025 15:13:12 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-450976
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                1575f574-b7cf-4d6a-9ab9-f0fb8538a042
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-450976                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-9tqxv                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-450976             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-newest-cni-450976    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-jfm7b                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-450976             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28s   kube-proxy       
	  Normal  Starting                 7s    kube-proxy       
	  Normal  Starting                 35s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s   kubelet          Node newest-cni-450976 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s   kubelet          Node newest-cni-450976 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s   kubelet          Node newest-cni-450976 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s   node-controller  Node newest-cni-450976 event: Registered Node newest-cni-450976 in Controller
	  Normal  RegisteredNode           5s    node-controller  Node newest-cni-450976 event: Registered Node newest-cni-450976 in Controller
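
The two node.kubernetes.io/not-ready taints in the node description above are exactly why coredns and storage-provisioner reported PodScheduled:Unschedulable earlier in this run: until kindnet drops a CNI config into /etc/cni/net.d/, the node stays NotReady and the taints stay in place. A sketch that surfaces those taints via client-go (kubeconfig path assumed, as before):

// node_taints.go: a sketch that prints each node's taints, the field behind the
// scheduler's "untolerated taint {node.kubernetes.io/not-ready: }" message above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, t := range n.Spec.Taints {
			fmt.Printf("%s\t%s=%s:%s\n", n.Name, t.Key, t.Value, t.Effect)
		}
	}
}
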
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
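
The "martian source" lines above are kernel log noise from pod-to-pod ARP/IP traffic crossing the bridge; the kernel only emits them when the log_martians sysctl (globally or per interface) is enabled. A sketch that reads the global setting from the standard procfs path:

// log_martians.go: a sketch that reads the sysctl responsible for the
// "martian source" messages above (standard procfs path).
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/log_martians")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println("log_martians =", strings.TrimSpace(string(b)))
}
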
	
	
	==> etcd [eca31c4960e5fee40ff7a27e80d78ba23e050229040a9c119c1a39d6d964c134] <==
	{"level":"warn","ts":"2025-10-26T15:13:42.131856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.149079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.157300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.166648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.176362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.184684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.192294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.201492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.210373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.219227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.227332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.237284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.247658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.267813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.278855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.291185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.306279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.317277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.324021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.333320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.341640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.357354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.366053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.376454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:42.443284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45550","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:13:51 up  2:56,  0 user,  load average: 4.10, 2.87, 1.88
	Linux newest-cni-450976 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [28e4049021789d7b497ba2bfd04b269e3b3c2807c7507dd9f483593309c84b80] <==
	I1026 15:13:44.037840       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:13:44.133039       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1026 15:13:44.133381       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:13:44.133401       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:13:44.133450       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:13:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:13:44.337210       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:13:44.337388       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:13:44.337408       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:13:44.337557       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 15:13:44.638246       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:13:44.638275       1 metrics.go:72] Registering metrics
	I1026 15:13:44.638355       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [dad7b5a044afb9affbe248c4fce4bf89b73634fb0298fd50fe83199eecb4779f] <==
	I1026 15:13:42.981324       1 aggregator.go:171] initial CRD sync complete...
	I1026 15:13:42.981343       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 15:13:42.981349       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:13:42.981358       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:13:42.981466       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 15:13:42.981475       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 15:13:42.981482       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 15:13:42.981527       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:13:42.981583       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 15:13:42.981899       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1026 15:13:42.984662       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 15:13:42.988889       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 15:13:43.003561       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:13:43.300093       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 15:13:43.335564       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:13:43.362268       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:13:43.371565       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:13:43.379719       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:13:43.417006       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.185.174"}
	I1026 15:13:43.429825       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.172.68"}
	I1026 15:13:43.890024       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:13:46.556387       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:13:46.654904       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:13:46.805606       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [d301b19a9754fef9062ff0ab32cef39843a3b341f9c9c9c979ce50772e060f34] <==
	I1026 15:13:46.314235       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 15:13:46.314341       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1026 15:13:46.315526       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 15:13:46.315548       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 15:13:46.318800       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 15:13:46.321183       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 15:13:46.324458       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 15:13:46.328811       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 15:13:46.331120       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:13:46.331124       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 15:13:46.335333       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 15:13:46.335585       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 15:13:46.335707       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-450976"
	I1026 15:13:46.335778       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1026 15:13:46.337176       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 15:13:46.339215       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 15:13:46.341462       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 15:13:46.342907       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 15:13:46.351043       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:13:46.351062       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:13:46.351070       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 15:13:46.351388       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 15:13:46.352260       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 15:13:46.352516       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 15:13:46.359607       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [4067fc481bc4baf4606a9f82937e71103371389384e0fc32bb2fb41a915456e9] <==
	I1026 15:13:43.930656       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:13:44.002646       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:13:44.103580       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:13:44.103626       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1026 15:13:44.103751       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:13:44.126070       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:13:44.126145       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:13:44.131651       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:13:44.132983       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:13:44.133099       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:13:44.136538       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:13:44.136563       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:13:44.136599       1 config.go:200] "Starting service config controller"
	I1026 15:13:44.136618       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:13:44.136633       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:13:44.136648       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:13:44.136803       1 config.go:309] "Starting node config controller"
	I1026 15:13:44.136904       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:13:44.136939       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:13:44.236914       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 15:13:44.236971       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:13:44.236965       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7b4821416cdb1f5a1c75031b5a1a9853efa078e8f2964c61061e443a8fe518d0] <==
	I1026 15:13:41.459797       1 serving.go:386] Generated self-signed cert in-memory
	W1026 15:13:42.912494       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 15:13:42.912532       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 15:13:42.912545       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 15:13:42.912554       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 15:13:42.956801       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 15:13:42.956835       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:13:42.959117       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:13:42.959177       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:13:42.960861       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:13:42.960988       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 15:13:43.059810       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.024679     659 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.024783     659 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.024813     659 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.025730     659 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: E1026 15:13:43.051290     659 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-450976\" already exists" pod="kube-system/kube-scheduler-newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.051323     659 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: E1026 15:13:43.060438     659 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-450976\" already exists" pod="kube-system/etcd-newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.060476     659 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: E1026 15:13:43.067598     659 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-450976\" already exists" pod="kube-system/kube-apiserver-newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.067642     659 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: E1026 15:13:43.074148     659 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-450976\" already exists" pod="kube-system/kube-controller-manager-newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.399883     659 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: E1026 15:13:43.407530     659 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-450976\" already exists" pod="kube-system/kube-controller-manager-newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.519202     659 apiserver.go:52] "Watching apiserver"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.522931     659 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.527456     659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e6c6e48-eb1f-4a31-9cf4-390096851e53-lib-modules\") pod \"kube-proxy-jfm7b\" (UID: \"6e6c6e48-eb1f-4a31-9cf4-390096851e53\") " pod="kube-system/kube-proxy-jfm7b"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.527522     659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6ade61f-e6fb-4746-9b65-ce10129cd53e-xtables-lock\") pod \"kindnet-9tqxv\" (UID: \"d6ade61f-e6fb-4746-9b65-ce10129cd53e\") " pod="kube-system/kindnet-9tqxv"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.527618     659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d6ade61f-e6fb-4746-9b65-ce10129cd53e-cni-cfg\") pod \"kindnet-9tqxv\" (UID: \"d6ade61f-e6fb-4746-9b65-ce10129cd53e\") " pod="kube-system/kindnet-9tqxv"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.527653     659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6ade61f-e6fb-4746-9b65-ce10129cd53e-lib-modules\") pod \"kindnet-9tqxv\" (UID: \"d6ade61f-e6fb-4746-9b65-ce10129cd53e\") " pod="kube-system/kindnet-9tqxv"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.527725     659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e6c6e48-eb1f-4a31-9cf4-390096851e53-xtables-lock\") pod \"kube-proxy-jfm7b\" (UID: \"6e6c6e48-eb1f-4a31-9cf4-390096851e53\") " pod="kube-system/kube-proxy-jfm7b"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: I1026 15:13:43.643033     659 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-450976"
	Oct 26 15:13:43 newest-cni-450976 kubelet[659]: E1026 15:13:43.651043     659 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-450976\" already exists" pod="kube-system/kube-apiserver-newest-cni-450976"
	Oct 26 15:13:45 newest-cni-450976 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:13:45 newest-cni-450976 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:13:45 newest-cni-450976 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
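
The kubelet shutdown at the tail of the log above is the pause path itself: minikube pause disables the kubelet before it tries to freeze containers (the embed-certs run below shows the same `sudo systemctl disable --now kubelet` step), which is why systemd reports kubelet.service stopping and stopped at 15:13:45. A minimal manual check of that state, assuming the newest-cni-450976 profile is still up (a reproduction sketch, not part of the harness):

	minikube -p newest-cni-450976 ssh -- 'systemctl is-active kubelet; systemctl is-enabled kubelet'
	# expected after a pause attempt: inactive / disabled
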
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-450976 -n newest-cni-450976
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-450976 -n newest-cni-450976: exit status 2 (361.490011ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-450976 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-7jwrr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nfgb7 kubernetes-dashboard-855c9754f9-ztb74
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-450976 describe pod coredns-66bc5c9577-7jwrr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nfgb7 kubernetes-dashboard-855c9754f9-ztb74
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-450976 describe pod coredns-66bc5c9577-7jwrr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nfgb7 kubernetes-dashboard-855c9754f9-ztb74: exit status 1 (64.116912ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-7jwrr" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-nfgb7" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-ztb74" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-450976 describe pod coredns-66bc5c9577-7jwrr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nfgb7 kubernetes-dashboard-855c9754f9-ztb74: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.31s)
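
For reference, the failing sequence can be replayed by hand with the same commands the harness ran above (a sketch, assuming the profile still exists; the pause invocation itself is inferred from the test name and the identical embed-certs run below):

	out/minikube-linux-amd64 pause -p newest-cni-450976 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-450976 -n newest-cni-450976
	kubectl --context newest-cni-450976 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
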

TestStartStop/group/embed-certs/serial/Pause (6.02s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-535130 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-535130 --alsologtostderr -v=1: exit status 80 (1.726041496s)

-- stdout --
	* Pausing node embed-certs-535130 ... 
	
	

-- /stdout --
** stderr ** 
	I1026 15:14:36.848555 1134112 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:14:36.848849 1134112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:14:36.848859 1134112 out.go:374] Setting ErrFile to fd 2...
	I1026 15:14:36.848864 1134112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:14:36.849061 1134112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:14:36.849299 1134112 out.go:368] Setting JSON to false
	I1026 15:14:36.849353 1134112 mustload.go:65] Loading cluster: embed-certs-535130
	I1026 15:14:36.849679 1134112 config.go:182] Loaded profile config "embed-certs-535130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:36.850117 1134112 cli_runner.go:164] Run: docker container inspect embed-certs-535130 --format={{.State.Status}}
	I1026 15:14:36.870647 1134112 host.go:66] Checking if "embed-certs-535130" exists ...
	I1026 15:14:36.870991 1134112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:14:36.931400 1134112 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-26 15:14:36.919981664 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:14:36.932023 1134112 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-535130 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 15:14:36.933936 1134112 out.go:179] * Pausing node embed-certs-535130 ... 
	I1026 15:14:36.935024 1134112 host.go:66] Checking if "embed-certs-535130" exists ...
	I1026 15:14:36.935331 1134112 ssh_runner.go:195] Run: systemctl --version
	I1026 15:14:36.935384 1134112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-535130
	I1026 15:14:36.954033 1134112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33862 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/embed-certs-535130/id_rsa Username:docker}
	I1026 15:14:37.054682 1134112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:14:37.094196 1134112 pause.go:52] kubelet running: true
	I1026 15:14:37.094272 1134112 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:14:37.303540 1134112 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:14:37.303676 1134112 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:14:37.390616 1134112 cri.go:89] found id: "239e148b0a6d4ade1fbee745dd81f15d67ba591399800ea09cf65541f7517cf7"
	I1026 15:14:37.390649 1134112 cri.go:89] found id: "0e893e41892fa12c7ec68b76a502b7a243a84d94912ec68bf8757235766702b0"
	I1026 15:14:37.390656 1134112 cri.go:89] found id: "4cd2c8e35ef08093cad19d86eb698b67b7f3efc33cc6e0f1b1f9e57148715d1d"
	I1026 15:14:37.390660 1134112 cri.go:89] found id: "5e1d1087d88f63dfa08475c5c3d49f7e0a5ce8b0ccdf279101ffe4c56c135534"
	I1026 15:14:37.390677 1134112 cri.go:89] found id: "fd864c01850c3a39fcff70d2a1c10ffa508c1d4673cb99b9ac1d5cb6d772026e"
	I1026 15:14:37.390680 1134112 cri.go:89] found id: "79f294b1af5377dbbe09bff36c0ce752c337fff26f468f52ba372eeae7c2fbd7"
	I1026 15:14:37.390683 1134112 cri.go:89] found id: "0cf664b8ea8fd4397a4e4d0903d086cb617b472ad1631050bc542a9e5c06ca09"
	I1026 15:14:37.390698 1134112 cri.go:89] found id: "43565d9e1913984f12b45a1203fca769c7b760ccf18830408972ff108c39b9bf"
	I1026 15:14:37.390705 1134112 cri.go:89] found id: "7f30d07b339ab7331f72cd45f5f34ee9c7eb82bec1197a77db9c34d2fcb6c24b"
	I1026 15:14:37.390724 1134112 cri.go:89] found id: "3102e249f41ed7e55df37fbb93359807120c0dd5cf37e7ee6fdf6e1c85f14410"
	I1026 15:14:37.390732 1134112 cri.go:89] found id: "c2406044b7f315c5b1ee3f4019f3a406d40d7ef84f78714460b6156504465324"
	I1026 15:14:37.390736 1134112 cri.go:89] found id: ""
	I1026 15:14:37.390798 1134112 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:14:37.405615 1134112 retry.go:31] will retry after 148.544231ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:14:37Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:14:37.554993 1134112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:14:37.570623 1134112 pause.go:52] kubelet running: false
	I1026 15:14:37.570730 1134112 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:14:37.755285 1134112 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:14:37.755382 1134112 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:14:37.839477 1134112 cri.go:89] found id: "239e148b0a6d4ade1fbee745dd81f15d67ba591399800ea09cf65541f7517cf7"
	I1026 15:14:37.839509 1134112 cri.go:89] found id: "0e893e41892fa12c7ec68b76a502b7a243a84d94912ec68bf8757235766702b0"
	I1026 15:14:37.839515 1134112 cri.go:89] found id: "4cd2c8e35ef08093cad19d86eb698b67b7f3efc33cc6e0f1b1f9e57148715d1d"
	I1026 15:14:37.839521 1134112 cri.go:89] found id: "5e1d1087d88f63dfa08475c5c3d49f7e0a5ce8b0ccdf279101ffe4c56c135534"
	I1026 15:14:37.839525 1134112 cri.go:89] found id: "fd864c01850c3a39fcff70d2a1c10ffa508c1d4673cb99b9ac1d5cb6d772026e"
	I1026 15:14:37.839531 1134112 cri.go:89] found id: "79f294b1af5377dbbe09bff36c0ce752c337fff26f468f52ba372eeae7c2fbd7"
	I1026 15:14:37.839534 1134112 cri.go:89] found id: "0cf664b8ea8fd4397a4e4d0903d086cb617b472ad1631050bc542a9e5c06ca09"
	I1026 15:14:37.839537 1134112 cri.go:89] found id: "43565d9e1913984f12b45a1203fca769c7b760ccf18830408972ff108c39b9bf"
	I1026 15:14:37.839539 1134112 cri.go:89] found id: "7f30d07b339ab7331f72cd45f5f34ee9c7eb82bec1197a77db9c34d2fcb6c24b"
	I1026 15:14:37.839547 1134112 cri.go:89] found id: "3102e249f41ed7e55df37fbb93359807120c0dd5cf37e7ee6fdf6e1c85f14410"
	I1026 15:14:37.839552 1134112 cri.go:89] found id: "c2406044b7f315c5b1ee3f4019f3a406d40d7ef84f78714460b6156504465324"
	I1026 15:14:37.839556 1134112 cri.go:89] found id: ""
	I1026 15:14:37.839610 1134112 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:14:37.853412 1134112 retry.go:31] will retry after 343.593922ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:14:37Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:14:38.198055 1134112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:14:38.216481 1134112 pause.go:52] kubelet running: false
	I1026 15:14:38.216546 1134112 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:14:38.396250 1134112 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:14:38.396366 1134112 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:14:38.480852 1134112 cri.go:89] found id: "239e148b0a6d4ade1fbee745dd81f15d67ba591399800ea09cf65541f7517cf7"
	I1026 15:14:38.480881 1134112 cri.go:89] found id: "0e893e41892fa12c7ec68b76a502b7a243a84d94912ec68bf8757235766702b0"
	I1026 15:14:38.480887 1134112 cri.go:89] found id: "4cd2c8e35ef08093cad19d86eb698b67b7f3efc33cc6e0f1b1f9e57148715d1d"
	I1026 15:14:38.480893 1134112 cri.go:89] found id: "5e1d1087d88f63dfa08475c5c3d49f7e0a5ce8b0ccdf279101ffe4c56c135534"
	I1026 15:14:38.480897 1134112 cri.go:89] found id: "fd864c01850c3a39fcff70d2a1c10ffa508c1d4673cb99b9ac1d5cb6d772026e"
	I1026 15:14:38.480902 1134112 cri.go:89] found id: "79f294b1af5377dbbe09bff36c0ce752c337fff26f468f52ba372eeae7c2fbd7"
	I1026 15:14:38.480906 1134112 cri.go:89] found id: "0cf664b8ea8fd4397a4e4d0903d086cb617b472ad1631050bc542a9e5c06ca09"
	I1026 15:14:38.480909 1134112 cri.go:89] found id: "43565d9e1913984f12b45a1203fca769c7b760ccf18830408972ff108c39b9bf"
	I1026 15:14:38.480913 1134112 cri.go:89] found id: "7f30d07b339ab7331f72cd45f5f34ee9c7eb82bec1197a77db9c34d2fcb6c24b"
	I1026 15:14:38.480929 1134112 cri.go:89] found id: "3102e249f41ed7e55df37fbb93359807120c0dd5cf37e7ee6fdf6e1c85f14410"
	I1026 15:14:38.480937 1134112 cri.go:89] found id: "c2406044b7f315c5b1ee3f4019f3a406d40d7ef84f78714460b6156504465324"
	I1026 15:14:38.480941 1134112 cri.go:89] found id: ""
	I1026 15:14:38.480995 1134112 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:14:38.498295 1134112 out.go:203] 
	W1026 15:14:38.499954 1134112 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:14:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:14:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 15:14:38.499980 1134112 out.go:285] * 
	* 
	W1026 15:14:38.508037 1134112 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 15:14:38.509616 1134112 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-535130 --alsologtostderr -v=1 failed: exit status 80
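
The exit status 80 traces to the container-listing step: all three attempts at `sudo runc list -f json` in the stderr above fail with `open /run/runc: no such file or directory`, so the pause path never obtains a container list to freeze. A quick manual confirmation, assuming the profile is still running (the `/run/crun` check is an assumption: CRI-O commonly runs containers with crun instead of runc, which would leave `/run/runc` absent):

	out/minikube-linux-amd64 -p embed-certs-535130 ssh -- 'ls -d /run/runc /run/crun'
	out/minikube-linux-amd64 -p embed-certs-535130 ssh -- 'sudo runc list -f json'
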
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-535130
helpers_test.go:243: (dbg) docker inspect embed-certs-535130:

-- stdout --
	[
	    {
	        "Id": "51b1644009afa6cc6a6d9bc914c49eccef03f48b557ac0a8540a6c8848111e36",
	        "Created": "2025-10-26T15:12:28.122091236Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1114009,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:13:31.415539146Z",
	            "FinishedAt": "2025-10-26T15:13:30.484008333Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/51b1644009afa6cc6a6d9bc914c49eccef03f48b557ac0a8540a6c8848111e36/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51b1644009afa6cc6a6d9bc914c49eccef03f48b557ac0a8540a6c8848111e36/hostname",
	        "HostsPath": "/var/lib/docker/containers/51b1644009afa6cc6a6d9bc914c49eccef03f48b557ac0a8540a6c8848111e36/hosts",
	        "LogPath": "/var/lib/docker/containers/51b1644009afa6cc6a6d9bc914c49eccef03f48b557ac0a8540a6c8848111e36/51b1644009afa6cc6a6d9bc914c49eccef03f48b557ac0a8540a6c8848111e36-json.log",
	        "Name": "/embed-certs-535130",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-535130:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-535130",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51b1644009afa6cc6a6d9bc914c49eccef03f48b557ac0a8540a6c8848111e36",
	                "LowerDir": "/var/lib/docker/overlay2/f468af4290f0273a6f3d234c071dd725d57ae77fd19dd3a00f1d124df21e3267-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f468af4290f0273a6f3d234c071dd725d57ae77fd19dd3a00f1d124df21e3267/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f468af4290f0273a6f3d234c071dd725d57ae77fd19dd3a00f1d124df21e3267/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f468af4290f0273a6f3d234c071dd725d57ae77fd19dd3a00f1d124df21e3267/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-535130",
	                "Source": "/var/lib/docker/volumes/embed-certs-535130/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-535130",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-535130",
	                "name.minikube.sigs.k8s.io": "embed-certs-535130",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "204653b8b321268d0c8cc60442bc19a90dc557b4c2a7b883efb8af5e6b54170a",
	            "SandboxKey": "/var/run/docker/netns/204653b8b321",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33862"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33863"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33866"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33864"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33865"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-535130": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:3e:65:d6:7b:90",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c696734ed668df0fca3efb0f7c1c0265275f09b80d9a59f85ab28b09787295d5",
	                    "EndpointID": "0f7fefc0af864babc78ea885345a53079d24f7387f6cc53b0aa5025d9fde6a38",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-535130",
	                        "51b1644009af"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
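
Note the `22/tcp` binding (host port 33862) in the inspect output matches the SSH endpoint the pause command dialed at 15:14:36.954033 above. It can be recovered with the same Go template the harness used (shown verbatim in the stderr log):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-535130
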
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-535130 -n embed-certs-535130
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-535130 -n embed-certs-535130: exit status 2 (385.784135ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-535130 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-535130 logs -n 25: (1.235565816s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                  │      PROFILE       │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-498531 sudo journalctl -xeu kubelet --all --full --no-pager                                                                    │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo cat /etc/kubernetes/kubelet.conf                                                                                   │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo cat /var/lib/kubelet/config.yaml                                                                                   │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo systemctl status docker --all --full --no-pager                                                                    │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ ssh     │ -p auto-498531 sudo systemctl cat docker --no-pager                                                                                    │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo cat /etc/docker/daemon.json                                                                                        │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ ssh     │ -p auto-498531 sudo docker system info                                                                                                 │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ ssh     │ -p auto-498531 sudo systemctl status cri-docker --all --full --no-pager                                                                │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ ssh     │ -p auto-498531 sudo systemctl cat cri-docker --no-pager                                                                                │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                           │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ ssh     │ -p auto-498531 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                     │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo cri-dockerd --version                                                                                              │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo systemctl status containerd --all --full --no-pager                                                                │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ ssh     │ -p auto-498531 sudo systemctl cat containerd --no-pager                                                                                │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo cat /lib/systemd/system/containerd.service                                                                         │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo cat /etc/containerd/config.toml                                                                                    │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo containerd config dump                                                                                             │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo systemctl status crio --all --full --no-pager                                                                      │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo systemctl cat crio --no-pager                                                                                      │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo crio config                                                                                                        │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ delete  │ -p auto-498531                                                                                                                         │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ start   │ -p calico-498531 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio │ calico-498531      │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ image   │ embed-certs-535130 image list --format=json                                                                                            │ embed-certs-535130 │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ pause   │ -p embed-certs-535130 --alsologtostderr -v=1                                                                                           │ embed-certs-535130 │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:14:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:14:22.027033 1131084 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:14:22.027189 1131084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:14:22.027197 1131084 out.go:374] Setting ErrFile to fd 2...
	I1026 15:14:22.027203 1131084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:14:22.027481 1131084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:14:22.028142 1131084 out.go:368] Setting JSON to false
	I1026 15:14:22.030037 1131084 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10610,"bootTime":1761481052,"procs":408,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:14:22.030199 1131084 start.go:141] virtualization: kvm guest
	I1026 15:14:22.033654 1131084 out.go:179] * [calico-498531] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:14:22.035469 1131084 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:14:22.035521 1131084 notify.go:220] Checking for updates...
	I1026 15:14:22.038257 1131084 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:14:22.039696 1131084 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:14:22.041116 1131084 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 15:14:22.045858 1131084 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:14:22.047496 1131084 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:14:22.049256 1131084 config.go:182] Loaded profile config "default-k8s-diff-port-790012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:22.049393 1131084 config.go:182] Loaded profile config "embed-certs-535130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:22.049518 1131084 config.go:182] Loaded profile config "kindnet-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:22.049761 1131084 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:14:22.082087 1131084 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 15:14:22.082232 1131084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:14:22.158702 1131084 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 15:14:22.145242478 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:14:22.158821 1131084 docker.go:318] overlay module found
	I1026 15:14:22.160896 1131084 out.go:179] * Using the docker driver based on user configuration
	I1026 15:14:21.082714 1122250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:21.582393 1122250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:22.082722 1122250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:22.172853 1122250 kubeadm.go:1113] duration metric: took 4.715500659s to wait for elevateKubeSystemPrivileges
	I1026 15:14:22.172889 1122250 kubeadm.go:402] duration metric: took 16.379310994s to StartCluster
	I1026 15:14:22.172911 1122250 settings.go:142] acquiring lock: {Name:mkab79daecf1fab35293493e1e2484069a81f3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:22.172985 1122250 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:14:22.175304 1122250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:22.175585 1122250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:14:22.175587 1122250 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:14:22.175698 1122250 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:14:22.175805 1122250 config.go:182] Loaded profile config "kindnet-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:22.175815 1122250 addons.go:69] Setting default-storageclass=true in profile "kindnet-498531"
	I1026 15:14:22.175832 1122250 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-498531"
	I1026 15:14:22.175807 1122250 addons.go:69] Setting storage-provisioner=true in profile "kindnet-498531"
	I1026 15:14:22.175869 1122250 addons.go:238] Setting addon storage-provisioner=true in "kindnet-498531"
	I1026 15:14:22.175897 1122250 host.go:66] Checking if "kindnet-498531" exists ...
	I1026 15:14:22.176275 1122250 cli_runner.go:164] Run: docker container inspect kindnet-498531 --format={{.State.Status}}
	I1026 15:14:22.176713 1122250 cli_runner.go:164] Run: docker container inspect kindnet-498531 --format={{.State.Status}}
	I1026 15:14:22.178384 1122250 out.go:179] * Verifying Kubernetes components...
	I1026 15:14:22.179747 1122250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:14:22.225875 1122250 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:14:22.164829 1131084 start.go:305] selected driver: docker
	I1026 15:14:22.164855 1131084 start.go:925] validating driver "docker" against <nil>
	I1026 15:14:22.164873 1131084 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:14:22.165618 1131084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:14:22.283220 1131084 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-10-26 15:14:22.261984462 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:14:22.283686 1131084 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:14:22.284045 1131084 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:14:22.286031 1131084 out.go:179] * Using Docker driver with root privileges
	I1026 15:14:22.287355 1131084 cni.go:84] Creating CNI manager for "calico"
	I1026 15:14:22.287381 1131084 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1026 15:14:22.287541 1131084 start.go:349] cluster config:
	{Name:calico-498531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:14:22.289718 1131084 out.go:179] * Starting "calico-498531" primary control-plane node in "calico-498531" cluster
	I1026 15:14:22.291056 1131084 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:14:22.293520 1131084 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:14:22.295704 1131084 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:14:22.295766 1131084 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 15:14:22.295776 1131084 cache.go:58] Caching tarball of preloaded images
	I1026 15:14:22.295848 1131084 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:14:22.295922 1131084 preload.go:233] Found /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 15:14:22.295935 1131084 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:14:22.296068 1131084 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/config.json ...
	I1026 15:14:22.296094 1131084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/config.json: {Name:mk608ef37dee609688bd00cb752182a38a72f55a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:22.322410 1131084 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:14:22.322446 1131084 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:14:22.322469 1131084 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:14:22.322508 1131084 start.go:360] acquireMachinesLock for calico-498531: {Name:mkad5fbf5f1a91b92ec641cca7eb150eb880ccbc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:14:22.322626 1131084 start.go:364] duration metric: took 94.47µs to acquireMachinesLock for "calico-498531"
	I1026 15:14:22.322657 1131084 start.go:93] Provisioning new machine with config: &{Name:calico-498531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:14:22.322774 1131084 start.go:125] createHost starting for "" (driver="docker")
	I1026 15:14:22.226245 1122250 addons.go:238] Setting addon default-storageclass=true in "kindnet-498531"
	I1026 15:14:22.226315 1122250 host.go:66] Checking if "kindnet-498531" exists ...
	I1026 15:14:22.226830 1122250 cli_runner.go:164] Run: docker container inspect kindnet-498531 --format={{.State.Status}}
	I1026 15:14:22.228152 1122250 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:14:22.228288 1122250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:14:22.229229 1122250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-498531
	I1026 15:14:22.266297 1122250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33872 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/kindnet-498531/id_rsa Username:docker}
	I1026 15:14:22.269327 1122250 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:14:22.269464 1122250 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:14:22.269588 1122250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-498531
	I1026 15:14:22.295754 1122250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33872 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/kindnet-498531/id_rsa Username:docker}
	I1026 15:14:22.314656 1122250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 15:14:22.376057 1122250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:14:22.408442 1122250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:14:22.436007 1122250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:14:22.559507 1122250 node_ready.go:35] waiting up to 15m0s for node "kindnet-498531" to be "Ready" ...
	I1026 15:14:22.559939 1122250 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1026 15:14:22.995731 1122250 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1026 15:14:23.623993 1113766 pod_ready.go:94] pod "coredns-66bc5c9577-pnbct" is "Ready"
	I1026 15:14:23.624022 1113766 pod_ready.go:86] duration metric: took 41.506638475s for pod "coredns-66bc5c9577-pnbct" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:23.626676 1113766 pod_ready.go:83] waiting for pod "etcd-embed-certs-535130" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:23.631235 1113766 pod_ready.go:94] pod "etcd-embed-certs-535130" is "Ready"
	I1026 15:14:23.631260 1113766 pod_ready.go:86] duration metric: took 4.560994ms for pod "etcd-embed-certs-535130" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:23.633340 1113766 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-535130" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:23.637977 1113766 pod_ready.go:94] pod "kube-apiserver-embed-certs-535130" is "Ready"
	I1026 15:14:23.638002 1113766 pod_ready.go:86] duration metric: took 4.63905ms for pod "kube-apiserver-embed-certs-535130" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:23.640311 1113766 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-535130" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:23.821988 1113766 pod_ready.go:94] pod "kube-controller-manager-embed-certs-535130" is "Ready"
	I1026 15:14:23.822027 1113766 pod_ready.go:86] duration metric: took 181.691288ms for pod "kube-controller-manager-embed-certs-535130" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:24.022415 1113766 pod_ready.go:83] waiting for pod "kube-proxy-nbr2d" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:24.422044 1113766 pod_ready.go:94] pod "kube-proxy-nbr2d" is "Ready"
	I1026 15:14:24.422081 1113766 pod_ready.go:86] duration metric: took 399.634014ms for pod "kube-proxy-nbr2d" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:24.622378 1113766 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-535130" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:25.021678 1113766 pod_ready.go:94] pod "kube-scheduler-embed-certs-535130" is "Ready"
	I1026 15:14:25.021707 1113766 pod_ready.go:86] duration metric: took 399.302305ms for pod "kube-scheduler-embed-certs-535130" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:25.021720 1113766 pod_ready.go:40] duration metric: took 42.913142082s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:14:25.068443 1113766 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:14:25.072045 1113766 out.go:179] * Done! kubectl is now configured to use "embed-certs-535130" cluster and "default" namespace by default
	I1026 15:14:22.997241 1122250 addons.go:514] duration metric: took 821.534236ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 15:14:23.064887 1122250 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-498531" context rescaled to 1 replicas
	W1026 15:14:24.563424 1122250 node_ready.go:57] node "kindnet-498531" has "Ready":"False" status (will retry)
	W1026 15:14:22.586876 1123102 pod_ready.go:104] pod "coredns-66bc5c9577-shw6l" is not "Ready", error: <nil>
	W1026 15:14:25.084889 1123102 pod_ready.go:104] pod "coredns-66bc5c9577-shw6l" is not "Ready", error: <nil>
	I1026 15:14:22.325114 1131084 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 15:14:22.325472 1131084 start.go:159] libmachine.API.Create for "calico-498531" (driver="docker")
	I1026 15:14:22.325515 1131084 client.go:168] LocalClient.Create starting
	I1026 15:14:22.325616 1131084 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem
	I1026 15:14:22.325662 1131084 main.go:141] libmachine: Decoding PEM data...
	I1026 15:14:22.325686 1131084 main.go:141] libmachine: Parsing certificate...
	I1026 15:14:22.325769 1131084 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem
	I1026 15:14:22.325801 1131084 main.go:141] libmachine: Decoding PEM data...
	I1026 15:14:22.325814 1131084 main.go:141] libmachine: Parsing certificate...
	I1026 15:14:22.326289 1131084 cli_runner.go:164] Run: docker network inspect calico-498531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 15:14:22.352348 1131084 cli_runner.go:211] docker network inspect calico-498531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 15:14:22.352457 1131084 network_create.go:284] running [docker network inspect calico-498531] to gather additional debugging logs...
	I1026 15:14:22.352482 1131084 cli_runner.go:164] Run: docker network inspect calico-498531
	W1026 15:14:22.375684 1131084 cli_runner.go:211] docker network inspect calico-498531 returned with exit code 1
	I1026 15:14:22.375723 1131084 network_create.go:287] error running [docker network inspect calico-498531]: docker network inspect calico-498531: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-498531 not found
	I1026 15:14:22.375740 1131084 network_create.go:289] output of [docker network inspect calico-498531]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-498531 not found
	
	** /stderr **
	I1026 15:14:22.375893 1131084 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:14:22.401795 1131084 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fa58be42f477 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:e4:ad:45:54:67} reservation:<nil>}
	I1026 15:14:22.403013 1131084 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-788b1aa150f9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:3d:9b:f7:9b:2d} reservation:<nil>}
	I1026 15:14:22.405711 1131084 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3ea0f8afe5af IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:d6:81:f4:17:77:eb} reservation:<nil>}
	I1026 15:14:22.406512 1131084 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c696734ed668 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:9a:3a:13:85:1e} reservation:<nil>}
	I1026 15:14:22.407969 1131084 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-eb8db690bfd7 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:c2:80:70:9a:55:40} reservation:<nil>}
	I1026 15:14:22.409485 1131084 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fde330}
	I1026 15:14:22.409584 1131084 network_create.go:124] attempt to create docker network calico-498531 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1026 15:14:22.409673 1131084 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-498531 calico-498531
	I1026 15:14:22.501874 1131084 network_create.go:108] docker network calico-498531 192.168.94.0/24 created
	I1026 15:14:22.501935 1131084 kic.go:121] calculated static IP "192.168.94.2" for the "calico-498531" container
	I1026 15:14:22.502006 1131084 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 15:14:22.526630 1131084 cli_runner.go:164] Run: docker volume create calico-498531 --label name.minikube.sigs.k8s.io=calico-498531 --label created_by.minikube.sigs.k8s.io=true
	I1026 15:14:22.551624 1131084 oci.go:103] Successfully created a docker volume calico-498531
	I1026 15:14:22.551954 1131084 cli_runner.go:164] Run: docker run --rm --name calico-498531-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-498531 --entrypoint /usr/bin/test -v calico-498531:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 15:14:23.368364 1131084 oci.go:107] Successfully prepared a docker volume calico-498531
	I1026 15:14:23.368408 1131084 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:14:23.368433 1131084 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 15:14:23.368484 1131084 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-498531:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1026 15:14:26.632252 1122250 node_ready.go:57] node "kindnet-498531" has "Ready":"False" status (will retry)
	W1026 15:14:29.063056 1122250 node_ready.go:57] node "kindnet-498531" has "Ready":"False" status (will retry)
	W1026 15:14:27.584521 1123102 pod_ready.go:104] pod "coredns-66bc5c9577-shw6l" is not "Ready", error: <nil>
	W1026 15:14:30.083374 1123102 pod_ready.go:104] pod "coredns-66bc5c9577-shw6l" is not "Ready", error: <nil>
	I1026 15:14:27.953615 1131084 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-498531:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.585071065s)
	I1026 15:14:27.953649 1131084 kic.go:203] duration metric: took 4.585211147s to extract preloaded images to volume ...
	W1026 15:14:27.953747 1131084 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 15:14:27.953808 1131084 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 15:14:27.953860 1131084 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 15:14:28.015980 1131084 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-498531 --name calico-498531 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-498531 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-498531 --network calico-498531 --ip 192.168.94.2 --volume calico-498531:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 15:14:28.312900 1131084 cli_runner.go:164] Run: docker container inspect calico-498531 --format={{.State.Running}}
	I1026 15:14:28.332498 1131084 cli_runner.go:164] Run: docker container inspect calico-498531 --format={{.State.Status}}
	I1026 15:14:28.352974 1131084 cli_runner.go:164] Run: docker exec calico-498531 stat /var/lib/dpkg/alternatives/iptables
	I1026 15:14:28.400948 1131084 oci.go:144] the created container "calico-498531" has a running status.
	I1026 15:14:28.400983 1131084 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/calico-498531/id_rsa...
	I1026 15:14:28.788269 1131084 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-841519/.minikube/machines/calico-498531/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 15:14:28.817847 1131084 cli_runner.go:164] Run: docker container inspect calico-498531 --format={{.State.Status}}
	I1026 15:14:28.837847 1131084 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 15:14:28.837871 1131084 kic_runner.go:114] Args: [docker exec --privileged calico-498531 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 15:14:28.880976 1131084 cli_runner.go:164] Run: docker container inspect calico-498531 --format={{.State.Status}}
	I1026 15:14:28.900659 1131084 machine.go:93] provisionDockerMachine start ...
	I1026 15:14:28.900758 1131084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-498531
	I1026 15:14:28.920100 1131084 main.go:141] libmachine: Using SSH client type: native
	I1026 15:14:28.920491 1131084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33882 <nil> <nil>}
	I1026 15:14:28.920520 1131084 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:14:29.067639 1131084 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-498531
	
	I1026 15:14:29.067663 1131084 ubuntu.go:182] provisioning hostname "calico-498531"
	I1026 15:14:29.067734 1131084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-498531
	I1026 15:14:29.087948 1131084 main.go:141] libmachine: Using SSH client type: native
	I1026 15:14:29.088204 1131084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33882 <nil> <nil>}
	I1026 15:14:29.088219 1131084 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-498531 && echo "calico-498531" | sudo tee /etc/hostname
	I1026 15:14:29.240877 1131084 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-498531
	
	I1026 15:14:29.240966 1131084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-498531
	I1026 15:14:29.259356 1131084 main.go:141] libmachine: Using SSH client type: native
	I1026 15:14:29.259591 1131084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33882 <nil> <nil>}
	I1026 15:14:29.259612 1131084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-498531' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-498531/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-498531' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:14:29.403508 1131084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:14:29.403544 1131084 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-841519/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-841519/.minikube}
	I1026 15:14:29.403575 1131084 ubuntu.go:190] setting up certificates
	I1026 15:14:29.403592 1131084 provision.go:84] configureAuth start
	I1026 15:14:29.403661 1131084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-498531
	I1026 15:14:29.422912 1131084 provision.go:143] copyHostCerts
	I1026 15:14:29.422986 1131084 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem, removing ...
	I1026 15:14:29.423000 1131084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem
	I1026 15:14:29.423089 1131084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem (1675 bytes)
	I1026 15:14:29.423243 1131084 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem, removing ...
	I1026 15:14:29.423257 1131084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem
	I1026 15:14:29.423307 1131084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem (1082 bytes)
	I1026 15:14:29.423390 1131084 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem, removing ...
	I1026 15:14:29.423400 1131084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem
	I1026 15:14:29.423437 1131084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem (1123 bytes)
	I1026 15:14:29.423514 1131084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem org=jenkins.calico-498531 san=[127.0.0.1 192.168.94.2 calico-498531 localhost minikube]
	I1026 15:14:29.781033 1131084 provision.go:177] copyRemoteCerts
	I1026 15:14:29.781101 1131084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:14:29.781151 1131084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-498531
	I1026 15:14:29.800111 1131084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33882 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/calico-498531/id_rsa Username:docker}
	I1026 15:14:29.903471 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:14:29.923827 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 15:14:29.942992 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:14:29.961590 1131084 provision.go:87] duration metric: took 557.975669ms to configureAuth
	I1026 15:14:29.961625 1131084 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:14:29.961848 1131084 config.go:182] Loaded profile config "calico-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:29.962010 1131084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-498531
	I1026 15:14:29.980686 1131084 main.go:141] libmachine: Using SSH client type: native
	I1026 15:14:29.980906 1131084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33882 <nil> <nil>}
	I1026 15:14:29.980922 1131084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:14:30.245480 1131084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:14:30.245506 1131084 machine.go:96] duration metric: took 1.344824059s to provisionDockerMachine
	I1026 15:14:30.245515 1131084 client.go:171] duration metric: took 7.919992759s to LocalClient.Create
	I1026 15:14:30.245532 1131084 start.go:167] duration metric: took 7.920066064s to libmachine.API.Create "calico-498531"
	I1026 15:14:30.245539 1131084 start.go:293] postStartSetup for "calico-498531" (driver="docker")
	I1026 15:14:30.245549 1131084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:14:30.245607 1131084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:14:30.245646 1131084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-498531
	I1026 15:14:30.263433 1131084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33882 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/calico-498531/id_rsa Username:docker}
	I1026 15:14:30.367021 1131084 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:14:30.371039 1131084 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:14:30.371075 1131084 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:14:30.371086 1131084 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/addons for local assets ...
	I1026 15:14:30.371156 1131084 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/files for local assets ...
	I1026 15:14:30.371273 1131084 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem -> 8450952.pem in /etc/ssl/certs
	I1026 15:14:30.371413 1131084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:14:30.379997 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:14:30.401834 1131084 start.go:296] duration metric: took 156.256952ms for postStartSetup
	I1026 15:14:30.402266 1131084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-498531
	I1026 15:14:30.421441 1131084 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/config.json ...
	I1026 15:14:30.421726 1131084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:14:30.421776 1131084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-498531
	I1026 15:14:30.441111 1131084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33882 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/calico-498531/id_rsa Username:docker}
	I1026 15:14:30.540925 1131084 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:14:30.546045 1131084 start.go:128] duration metric: took 8.223251302s to createHost
	I1026 15:14:30.546069 1131084 start.go:83] releasing machines lock for "calico-498531", held for 8.223429226s
	I1026 15:14:30.546143 1131084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-498531
	I1026 15:14:30.564531 1131084 ssh_runner.go:195] Run: cat /version.json
	I1026 15:14:30.564591 1131084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-498531
	I1026 15:14:30.564610 1131084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:14:30.564683 1131084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-498531
	I1026 15:14:30.584193 1131084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33882 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/calico-498531/id_rsa Username:docker}
	I1026 15:14:30.584970 1131084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33882 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/calico-498531/id_rsa Username:docker}
	I1026 15:14:30.681718 1131084 ssh_runner.go:195] Run: systemctl --version
	I1026 15:14:30.740047 1131084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:14:30.777313 1131084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:14:30.782352 1131084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:14:30.782416 1131084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:14:30.813017 1131084 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 15:14:30.813046 1131084 start.go:495] detecting cgroup driver to use...
	I1026 15:14:30.813083 1131084 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 15:14:30.813130 1131084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:14:30.830799 1131084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:14:30.844484 1131084 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:14:30.844543 1131084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:14:30.862378 1131084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:14:30.883515 1131084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:14:30.969532 1131084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:14:31.061732 1131084 docker.go:234] disabling docker service ...
	I1026 15:14:31.061830 1131084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:14:31.081856 1131084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:14:31.095826 1131084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:14:31.181238 1131084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:14:31.267605 1131084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:14:31.281188 1131084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:14:31.296556 1131084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:14:31.296624 1131084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:31.308522 1131084 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 15:14:31.308593 1131084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:31.318361 1131084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:31.328061 1131084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:31.338149 1131084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:14:31.347397 1131084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:31.357082 1131084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:31.372068 1131084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
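
Taken together, the sed edits above leave the 02-crio.conf drop-in looking roughly like this (reconstructed from the logged substitutions; the TOML section headers and any neighboring keys are assumptions, since the log only shows the edits themselves):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
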
	I1026 15:14:31.381727 1131084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:14:31.389780 1131084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:14:31.397805 1131084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:14:31.483517 1131084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:14:31.597744 1131084 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:14:31.597822 1131084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:14:31.602759 1131084 start.go:563] Will wait 60s for crictl version
	I1026 15:14:31.602815 1131084 ssh_runner.go:195] Run: which crictl
	I1026 15:14:31.607206 1131084 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:14:31.632818 1131084 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:14:31.632898 1131084 ssh_runner.go:195] Run: crio --version
	I1026 15:14:31.663087 1131084 ssh_runner.go:195] Run: crio --version
	I1026 15:14:31.694487 1131084 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:14:31.695628 1131084 cli_runner.go:164] Run: docker network inspect calico-498531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:14:31.714194 1131084 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1026 15:14:31.718733 1131084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
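
The /etc/hosts rewrite above follows a filter-then-append pattern so that re-runs stay idempotent: drop any existing host.minikube.internal line, append the fresh mapping, then copy the result back with sudo. As a standalone sketch (IP and hostname taken from the log; the temp-file name is illustrative):

	ip=192.168.94.1; name=host.minikube.internal
	{ grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$
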
	I1026 15:14:31.729794 1131084 kubeadm.go:883] updating cluster {Name:calico-498531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:14:31.729983 1131084 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:14:31.730056 1131084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:14:31.764478 1131084 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:14:31.764505 1131084 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:14:31.764565 1131084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:14:31.791829 1131084 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:14:31.791853 1131084 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:14:31.791860 1131084 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1026 15:14:31.791965 1131084 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-498531 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
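
The [Service] override above is installed as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears a few lines below); the empty ExecStart= line clears the unit's stock command before the flag-laden one replaces it. A sketch of verifying the override after the daemon-reload, assuming standard systemd tooling:

	systemctl cat kubelet                    # base unit plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart      # should list the /var/lib/minikube/binaries path
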
	I1026 15:14:31.792048 1131084 ssh_runner.go:195] Run: crio config
	I1026 15:14:31.844095 1131084 cni.go:84] Creating CNI manager for "calico"
	I1026 15:14:31.844133 1131084 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:14:31.844158 1131084 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-498531 NodeName:calico-498531 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:14:31.844322 1131084 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-498531"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
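Before this rendered file is handed to kubeadm init, it can be sanity-checked offline. A sketch using the binary path and config location from the log (kubeadm config validate exists in recent kubeadm releases; --dry-run exercises the full init flow without side effects):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
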
	I1026 15:14:31.844393 1131084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:14:31.853309 1131084 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:14:31.853377 1131084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:14:31.862147 1131084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1026 15:14:31.877248 1131084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:14:31.893399 1131084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1026 15:14:31.907486 1131084 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:14:31.911580 1131084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:14:31.922598 1131084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:14:32.010841 1131084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:14:32.037983 1131084 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531 for IP: 192.168.94.2
	I1026 15:14:32.038013 1131084 certs.go:195] generating shared ca certs ...
	I1026 15:14:32.038038 1131084 certs.go:227] acquiring lock for ca certs: {Name:mkc310765b5f037cf348f6c57ba521193a825757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:32.038269 1131084 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key
	I1026 15:14:32.038333 1131084 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key
	I1026 15:14:32.038349 1131084 certs.go:257] generating profile certs ...
	I1026 15:14:32.038425 1131084 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/client.key
	I1026 15:14:32.038450 1131084 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/client.crt with IP's: []
	I1026 15:14:32.312156 1131084 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/client.crt ...
	I1026 15:14:32.312197 1131084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/client.crt: {Name:mkb1b43c58262db718e8d170148fae0d52eb48ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:32.312410 1131084 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/client.key ...
	I1026 15:14:32.312425 1131084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/client.key: {Name:mk81742df0d97949e17864edaa296faedfa8e131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:32.312548 1131084 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.key.15cc02cc
	I1026 15:14:32.312572 1131084 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.crt.15cc02cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1026 15:14:32.478187 1131084 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.crt.15cc02cc ...
	I1026 15:14:32.478219 1131084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.crt.15cc02cc: {Name:mkd6c3652a23627fe5a24bc0bd1949a08592c079 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:32.478410 1131084 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.key.15cc02cc ...
	I1026 15:14:32.478458 1131084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.key.15cc02cc: {Name:mk174b8926c3607c3963f01d1b77c59a2bc6d1ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:32.478585 1131084 certs.go:382] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.crt.15cc02cc -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.crt
	I1026 15:14:32.478701 1131084 certs.go:386] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.key.15cc02cc -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.key
	I1026 15:14:32.478800 1131084 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/proxy-client.key
	I1026 15:14:32.478822 1131084 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/proxy-client.crt with IP's: []
	I1026 15:14:32.654880 1131084 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/proxy-client.crt ...
	I1026 15:14:32.654920 1131084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/proxy-client.crt: {Name:mk84b0d54f743f6528098ad79518e74e2839c1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:32.655127 1131084 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/proxy-client.key ...
	I1026 15:14:32.655148 1131084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/proxy-client.key: {Name:mk66d0d55babfe35bce01e440e581ddbb80f8423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
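
Each profile cert above is produced the same way: generate a key, then sign a certificate against the shared minikube CA with the SANs that role needs (for the apiserver cert, the service IP, loopback, and node IP listed at 15:14:32.312572). A minimal openssl equivalent for that apiserver pair, with illustrative file names and subject:

	openssl genrsa -out apiserver.key 2048
	openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 1095 \
	  -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.94.2") \
	  -out apiserver.crt
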
	I1026 15:14:32.655395 1131084 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem (1338 bytes)
	W1026 15:14:32.655459 1131084 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095_empty.pem, impossibly tiny 0 bytes
	I1026 15:14:32.655476 1131084 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:14:32.655510 1131084 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:14:32.655546 1131084 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:14:32.655582 1131084 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem (1675 bytes)
	I1026 15:14:32.655643 1131084 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:14:32.656352 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:14:32.675929 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:14:32.694678 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:14:32.712972 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:14:32.731712 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 15:14:32.751416 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 15:14:32.770380 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:14:32.788707 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 15:14:32.807966 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /usr/share/ca-certificates/8450952.pem (1708 bytes)
	I1026 15:14:32.828197 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:14:32.847795 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem --> /usr/share/ca-certificates/845095.pem (1338 bytes)
	I1026 15:14:32.868432 1131084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:14:32.882369 1131084 ssh_runner.go:195] Run: openssl version
	I1026 15:14:32.889080 1131084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845095.pem && ln -fs /usr/share/ca-certificates/845095.pem /etc/ssl/certs/845095.pem"
	I1026 15:14:32.899603 1131084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845095.pem
	I1026 15:14:32.905031 1131084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:26 /usr/share/ca-certificates/845095.pem
	I1026 15:14:32.905112 1131084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845095.pem
	I1026 15:14:32.943270 1131084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/845095.pem /etc/ssl/certs/51391683.0"
	I1026 15:14:32.952802 1131084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8450952.pem && ln -fs /usr/share/ca-certificates/8450952.pem /etc/ssl/certs/8450952.pem"
	I1026 15:14:32.962090 1131084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8450952.pem
	I1026 15:14:32.966024 1131084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:26 /usr/share/ca-certificates/8450952.pem
	I1026 15:14:32.966087 1131084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8450952.pem
	I1026 15:14:33.004317 1131084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8450952.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:14:33.014144 1131084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:14:33.022942 1131084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:14:33.026969 1131084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:14 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:14:33.027025 1131084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:14:33.063768 1131084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
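
The openssl x509 -hash calls above compute the subject hash that OpenSSL's certificate-directory lookup uses, and the <hash>.0 symlink in /etc/ssl/certs (51391683.0, 3ec20f2e.0, b5213941.0 here) is what actually makes each CA trusted system-wide. The generic install step, as a sketch:

	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
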
	I1026 15:14:33.074766 1131084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:14:33.079127 1131084 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 15:14:33.079208 1131084 kubeadm.go:400] StartCluster: {Name:calico-498531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:14:33.079307 1131084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:14:33.079389 1131084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:14:33.113757 1131084 cri.go:89] found id: ""
	I1026 15:14:33.113847 1131084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:14:33.123064 1131084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:14:33.131834 1131084 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 15:14:33.131898 1131084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:14:33.140319 1131084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:14:33.140344 1131084 kubeadm.go:157] found existing configuration files:
	
	I1026 15:14:33.140390 1131084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:14:33.148960 1131084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:14:33.149019 1131084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:14:33.157003 1131084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:14:33.165183 1131084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:14:33.165253 1131084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:14:33.173647 1131084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:14:33.181959 1131084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:14:33.182027 1131084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:14:33.189656 1131084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:14:33.197834 1131084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:14:33.197896 1131084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 15:14:33.205619 1131084 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 15:14:33.243805 1131084 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 15:14:33.244498 1131084 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 15:14:33.265901 1131084 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 15:14:33.265994 1131084 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 15:14:33.266065 1131084 kubeadm.go:318] OS: Linux
	I1026 15:14:33.266196 1131084 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 15:14:33.266279 1131084 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 15:14:33.266346 1131084 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 15:14:33.266438 1131084 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 15:14:33.266526 1131084 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 15:14:33.266601 1131084 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 15:14:33.266699 1131084 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 15:14:33.266779 1131084 kubeadm.go:318] CGROUPS_IO: enabled
	I1026 15:14:33.327889 1131084 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 15:14:33.328064 1131084 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 15:14:33.328217 1131084 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 15:14:33.336945 1131084 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 15:14:33.339010 1131084 out.go:252]   - Generating certificates and keys ...
	I1026 15:14:33.339092 1131084 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:14:33.339190 1131084 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:14:33.406640 1131084 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:14:33.567594 1131084 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:14:33.725919 1131084 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:14:33.844049 1131084 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:14:33.874815 1131084 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:14:33.875341 1131084 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [calico-498531 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1026 15:14:34.176333 1131084 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:14:34.176578 1131084 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [calico-498531 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1026 15:14:34.240669 1131084 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:14:34.477448 1131084 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:14:34.581009 1131084 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:14:34.581250 1131084 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:14:35.129273 1131084 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:14:35.271657 1131084 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 15:14:35.495983 1131084 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:14:35.623505 1131084 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:14:35.900652 1131084 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:14:35.901283 1131084 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:14:35.905235 1131084 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1026 15:14:31.063275 1122250 node_ready.go:57] node "kindnet-498531" has "Ready":"False" status (will retry)
	I1026 15:14:33.563117 1122250 node_ready.go:49] node "kindnet-498531" is "Ready"
	I1026 15:14:33.563150 1122250 node_ready.go:38] duration metric: took 11.003589447s for node "kindnet-498531" to be "Ready" ...
	I1026 15:14:33.563199 1122250 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:14:33.563266 1122250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:14:33.576858 1122250 api_server.go:72] duration metric: took 11.401224049s to wait for apiserver process to appear ...
	I1026 15:14:33.576891 1122250 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:14:33.576917 1122250 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1026 15:14:33.581421 1122250 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1026 15:14:33.582522 1122250 api_server.go:141] control plane version: v1.34.1
	I1026 15:14:33.582565 1122250 api_server.go:131] duration metric: took 5.66448ms to wait for apiserver health ...
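
The healthz probe is a plain GET against the secure port; anything but a 200 "ok" keeps the wait loop going. The same check by hand, with the endpoint from the log (-k skips serving-cert verification, which the probe does not need):

	curl -k https://192.168.103.2:8443/healthz    # expect: ok
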
	I1026 15:14:33.582575 1122250 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:14:33.586724 1122250 system_pods.go:59] 8 kube-system pods found
	I1026 15:14:33.586771 1122250 system_pods.go:61] "coredns-66bc5c9577-95sqq" [93f4d686-0c06-48e6-9ccc-c07225beb1ed] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:33.586785 1122250 system_pods.go:61] "etcd-kindnet-498531" [f334ddf9-c0c2-4af0-88ec-0b5427d4942d] Running
	I1026 15:14:33.586803 1122250 system_pods.go:61] "kindnet-6d577" [b1fd8564-b7cf-40f1-90f7-17a0ea8fd227] Running
	I1026 15:14:33.586809 1122250 system_pods.go:61] "kube-apiserver-kindnet-498531" [8fe7749a-ce9c-483d-aee5-25070cebf447] Running
	I1026 15:14:33.586817 1122250 system_pods.go:61] "kube-controller-manager-kindnet-498531" [628ba3a0-0a73-45fd-beee-d7de8002c3df] Running
	I1026 15:14:33.586823 1122250 system_pods.go:61] "kube-proxy-8jlfc" [c04308f6-8f4d-42e8-b3ac-e31c28be9148] Running
	I1026 15:14:33.586830 1122250 system_pods.go:61] "kube-scheduler-kindnet-498531" [15912c58-5a96-49ae-a262-e54309fc9b02] Running
	I1026 15:14:33.586837 1122250 system_pods.go:61] "storage-provisioner" [f38dbe78-c498-4c89-b60b-0c8a7acc5eea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:14:33.586849 1122250 system_pods.go:74] duration metric: took 4.265706ms to wait for pod list to return data ...
	I1026 15:14:33.586866 1122250 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:14:33.589645 1122250 default_sa.go:45] found service account: "default"
	I1026 15:14:33.589670 1122250 default_sa.go:55] duration metric: took 2.793476ms for default service account to be created ...
	I1026 15:14:33.589682 1122250 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:14:33.593125 1122250 system_pods.go:86] 8 kube-system pods found
	I1026 15:14:33.593221 1122250 system_pods.go:89] "coredns-66bc5c9577-95sqq" [93f4d686-0c06-48e6-9ccc-c07225beb1ed] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:33.593236 1122250 system_pods.go:89] "etcd-kindnet-498531" [f334ddf9-c0c2-4af0-88ec-0b5427d4942d] Running
	I1026 15:14:33.593245 1122250 system_pods.go:89] "kindnet-6d577" [b1fd8564-b7cf-40f1-90f7-17a0ea8fd227] Running
	I1026 15:14:33.593266 1122250 system_pods.go:89] "kube-apiserver-kindnet-498531" [8fe7749a-ce9c-483d-aee5-25070cebf447] Running
	I1026 15:14:33.593276 1122250 system_pods.go:89] "kube-controller-manager-kindnet-498531" [628ba3a0-0a73-45fd-beee-d7de8002c3df] Running
	I1026 15:14:33.593282 1122250 system_pods.go:89] "kube-proxy-8jlfc" [c04308f6-8f4d-42e8-b3ac-e31c28be9148] Running
	I1026 15:14:33.593287 1122250 system_pods.go:89] "kube-scheduler-kindnet-498531" [15912c58-5a96-49ae-a262-e54309fc9b02] Running
	I1026 15:14:33.593298 1122250 system_pods.go:89] "storage-provisioner" [f38dbe78-c498-4c89-b60b-0c8a7acc5eea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:14:33.593330 1122250 retry.go:31] will retry after 280.883566ms: missing components: kube-dns
	I1026 15:14:33.878127 1122250 system_pods.go:86] 8 kube-system pods found
	I1026 15:14:33.878155 1122250 system_pods.go:89] "coredns-66bc5c9577-95sqq" [93f4d686-0c06-48e6-9ccc-c07225beb1ed] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:33.878175 1122250 system_pods.go:89] "etcd-kindnet-498531" [f334ddf9-c0c2-4af0-88ec-0b5427d4942d] Running
	I1026 15:14:33.878199 1122250 system_pods.go:89] "kindnet-6d577" [b1fd8564-b7cf-40f1-90f7-17a0ea8fd227] Running
	I1026 15:14:33.878209 1122250 system_pods.go:89] "kube-apiserver-kindnet-498531" [8fe7749a-ce9c-483d-aee5-25070cebf447] Running
	I1026 15:14:33.878213 1122250 system_pods.go:89] "kube-controller-manager-kindnet-498531" [628ba3a0-0a73-45fd-beee-d7de8002c3df] Running
	I1026 15:14:33.878216 1122250 system_pods.go:89] "kube-proxy-8jlfc" [c04308f6-8f4d-42e8-b3ac-e31c28be9148] Running
	I1026 15:14:33.878220 1122250 system_pods.go:89] "kube-scheduler-kindnet-498531" [15912c58-5a96-49ae-a262-e54309fc9b02] Running
	I1026 15:14:33.878234 1122250 system_pods.go:89] "storage-provisioner" [f38dbe78-c498-4c89-b60b-0c8a7acc5eea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:14:33.878250 1122250 retry.go:31] will retry after 306.998461ms: missing components: kube-dns
	I1026 15:14:34.189580 1122250 system_pods.go:86] 8 kube-system pods found
	I1026 15:14:34.189611 1122250 system_pods.go:89] "coredns-66bc5c9577-95sqq" [93f4d686-0c06-48e6-9ccc-c07225beb1ed] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:34.189616 1122250 system_pods.go:89] "etcd-kindnet-498531" [f334ddf9-c0c2-4af0-88ec-0b5427d4942d] Running
	I1026 15:14:34.189623 1122250 system_pods.go:89] "kindnet-6d577" [b1fd8564-b7cf-40f1-90f7-17a0ea8fd227] Running
	I1026 15:14:34.189626 1122250 system_pods.go:89] "kube-apiserver-kindnet-498531" [8fe7749a-ce9c-483d-aee5-25070cebf447] Running
	I1026 15:14:34.189629 1122250 system_pods.go:89] "kube-controller-manager-kindnet-498531" [628ba3a0-0a73-45fd-beee-d7de8002c3df] Running
	I1026 15:14:34.189633 1122250 system_pods.go:89] "kube-proxy-8jlfc" [c04308f6-8f4d-42e8-b3ac-e31c28be9148] Running
	I1026 15:14:34.189637 1122250 system_pods.go:89] "kube-scheduler-kindnet-498531" [15912c58-5a96-49ae-a262-e54309fc9b02] Running
	I1026 15:14:34.189641 1122250 system_pods.go:89] "storage-provisioner" [f38dbe78-c498-4c89-b60b-0c8a7acc5eea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:14:34.189657 1122250 retry.go:31] will retry after 346.706947ms: missing components: kube-dns
	I1026 15:14:34.541594 1122250 system_pods.go:86] 8 kube-system pods found
	I1026 15:14:34.541629 1122250 system_pods.go:89] "coredns-66bc5c9577-95sqq" [93f4d686-0c06-48e6-9ccc-c07225beb1ed] Running
	I1026 15:14:34.541637 1122250 system_pods.go:89] "etcd-kindnet-498531" [f334ddf9-c0c2-4af0-88ec-0b5427d4942d] Running
	I1026 15:14:34.541643 1122250 system_pods.go:89] "kindnet-6d577" [b1fd8564-b7cf-40f1-90f7-17a0ea8fd227] Running
	I1026 15:14:34.541647 1122250 system_pods.go:89] "kube-apiserver-kindnet-498531" [8fe7749a-ce9c-483d-aee5-25070cebf447] Running
	I1026 15:14:34.541651 1122250 system_pods.go:89] "kube-controller-manager-kindnet-498531" [628ba3a0-0a73-45fd-beee-d7de8002c3df] Running
	I1026 15:14:34.541657 1122250 system_pods.go:89] "kube-proxy-8jlfc" [c04308f6-8f4d-42e8-b3ac-e31c28be9148] Running
	I1026 15:14:34.541662 1122250 system_pods.go:89] "kube-scheduler-kindnet-498531" [15912c58-5a96-49ae-a262-e54309fc9b02] Running
	I1026 15:14:34.541666 1122250 system_pods.go:89] "storage-provisioner" [f38dbe78-c498-4c89-b60b-0c8a7acc5eea] Running
	I1026 15:14:34.541677 1122250 system_pods.go:126] duration metric: took 951.988355ms to wait for k8s-apps to be running ...
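
The retries above poll the kube-system pod list at growing, jittered intervals until nothing is left Pending (here the gate was kube-dns). With kubectl the same condition can be stated declaratively; a sketch assuming a kubeconfig pointed at the kindnet-498531 cluster:

	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
	kubectl -n kube-system get pods    # every entry should now report Running
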
	I1026 15:14:34.541690 1122250 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:14:34.541750 1122250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:14:34.555570 1122250 system_svc.go:56] duration metric: took 13.867572ms WaitForService to wait for kubelet
	I1026 15:14:34.555604 1122250 kubeadm.go:586] duration metric: took 12.379977538s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:14:34.555624 1122250 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:14:34.558916 1122250 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 15:14:34.558944 1122250 node_conditions.go:123] node cpu capacity is 8
	I1026 15:14:34.558958 1122250 node_conditions.go:105] duration metric: took 3.329095ms to run NodePressure ...
	I1026 15:14:34.558970 1122250 start.go:241] waiting for startup goroutines ...
	I1026 15:14:34.558977 1122250 start.go:246] waiting for cluster config update ...
	I1026 15:14:34.558987 1122250 start.go:255] writing updated cluster config ...
	I1026 15:14:34.559299 1122250 ssh_runner.go:195] Run: rm -f paused
	I1026 15:14:34.563782 1122250 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:14:34.567670 1122250 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-95sqq" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:34.572469 1122250 pod_ready.go:94] pod "coredns-66bc5c9577-95sqq" is "Ready"
	I1026 15:14:34.572496 1122250 pod_ready.go:86] duration metric: took 4.797495ms for pod "coredns-66bc5c9577-95sqq" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:34.574819 1122250 pod_ready.go:83] waiting for pod "etcd-kindnet-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:34.579391 1122250 pod_ready.go:94] pod "etcd-kindnet-498531" is "Ready"
	I1026 15:14:34.579418 1122250 pod_ready.go:86] duration metric: took 4.57146ms for pod "etcd-kindnet-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:34.581636 1122250 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:34.586229 1122250 pod_ready.go:94] pod "kube-apiserver-kindnet-498531" is "Ready"
	I1026 15:14:34.586254 1122250 pod_ready.go:86] duration metric: took 4.595986ms for pod "kube-apiserver-kindnet-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:34.588554 1122250 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:34.968445 1122250 pod_ready.go:94] pod "kube-controller-manager-kindnet-498531" is "Ready"
	I1026 15:14:34.968480 1122250 pod_ready.go:86] duration metric: took 379.902384ms for pod "kube-controller-manager-kindnet-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:35.168998 1122250 pod_ready.go:83] waiting for pod "kube-proxy-8jlfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:35.568757 1122250 pod_ready.go:94] pod "kube-proxy-8jlfc" is "Ready"
	I1026 15:14:35.568788 1122250 pod_ready.go:86] duration metric: took 399.752935ms for pod "kube-proxy-8jlfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:35.768804 1122250 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:36.169001 1122250 pod_ready.go:94] pod "kube-scheduler-kindnet-498531" is "Ready"
	I1026 15:14:36.169041 1122250 pod_ready.go:86] duration metric: took 400.205487ms for pod "kube-scheduler-kindnet-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:36.169057 1122250 pod_ready.go:40] duration metric: took 1.605242517s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:14:36.217396 1122250 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:14:36.220298 1122250 out.go:179] * Done! kubectl is now configured to use "kindnet-498531" cluster and "default" namespace by default
	W1026 15:14:32.084499 1123102 pod_ready.go:104] pod "coredns-66bc5c9577-shw6l" is not "Ready", error: <nil>
	W1026 15:14:34.584900 1123102 pod_ready.go:104] pod "coredns-66bc5c9577-shw6l" is not "Ready", error: <nil>
	I1026 15:14:35.906556 1131084 out.go:252]   - Booting up control plane ...
	I1026 15:14:35.906673 1131084 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:14:35.906764 1131084 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:14:35.907439 1131084 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:14:35.921886 1131084 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:14:35.922023 1131084 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:14:35.930452 1131084 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:14:35.930988 1131084 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:14:35.931055 1131084 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:14:36.036435 1131084 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:14:36.036605 1131084 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	
	
	==> CRI-O <==
	Oct 26 15:14:12 embed-certs-535130 crio[570]: time="2025-10-26T15:14:12.371883338Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/bb3977ec506badb0ae29a644f519871fae1c030f4d2dddd78425bcbddff1f3be/merged/etc/passwd: no such file or directory"
	Oct 26 15:14:12 embed-certs-535130 crio[570]: time="2025-10-26T15:14:12.371929173Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/bb3977ec506badb0ae29a644f519871fae1c030f4d2dddd78425bcbddff1f3be/merged/etc/group: no such file or directory"
	Oct 26 15:14:12 embed-certs-535130 crio[570]: time="2025-10-26T15:14:12.372272495Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:12 embed-certs-535130 crio[570]: time="2025-10-26T15:14:12.414251681Z" level=info msg="Created container 239e148b0a6d4ade1fbee745dd81f15d67ba591399800ea09cf65541f7517cf7: kube-system/storage-provisioner/storage-provisioner" id=a68eb85a-3276-4e30-a8dd-9b5ffd46da1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:14:12 embed-certs-535130 crio[570]: time="2025-10-26T15:14:12.415679089Z" level=info msg="Starting container: 239e148b0a6d4ade1fbee745dd81f15d67ba591399800ea09cf65541f7517cf7" id=9b813a82-e0a3-4e35-b538-e12831b46ebe name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:14:12 embed-certs-535130 crio[570]: time="2025-10-26T15:14:12.419926346Z" level=info msg="Started container" PID=1712 containerID=239e148b0a6d4ade1fbee745dd81f15d67ba591399800ea09cf65541f7517cf7 description=kube-system/storage-provisioner/storage-provisioner id=9b813a82-e0a3-4e35-b538-e12831b46ebe name=/runtime.v1.RuntimeService/StartContainer sandboxID=8d181ec3b3c584db592a440aaecf49bdf46c00f0787eeed83260a5822d7e015e
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.175596644Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.183074584Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.183110921Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.183128704Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.194653425Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.194842346Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.195100225Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.203747241Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.204291277Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.205252123Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.212590893Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.212625466Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:14:37 embed-certs-535130 crio[570]: time="2025-10-26T15:14:37.203130385Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b044f21a-bb6b-456c-8b96-e37816b86b3b name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:14:37 embed-certs-535130 crio[570]: time="2025-10-26T15:14:37.204300287Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8fd08f86-4dd9-478e-83bc-b520c479292b name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:14:37 embed-certs-535130 crio[570]: time="2025-10-26T15:14:37.205543752Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ld9k9/dashboard-metrics-scraper" id=1d8946c6-cdab-4b07-9f86-67d901e9ee7f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:14:37 embed-certs-535130 crio[570]: time="2025-10-26T15:14:37.205720317Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:37 embed-certs-535130 crio[570]: time="2025-10-26T15:14:37.212191342Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:37 embed-certs-535130 crio[570]: time="2025-10-26T15:14:37.212883554Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:37 embed-certs-535130 crio[570]: time="2025-10-26T15:14:37.304367133Z" level=info msg="CreateCtr: context was either canceled or the deadline was exceeded: context canceled" id=1d8946c6-cdab-4b07-9f86-67d901e9ee7f name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	239e148b0a6d4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           27 seconds ago       Running             storage-provisioner         1                   8d181ec3b3c58       storage-provisioner                          kube-system
	3102e249f41ed       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago       Exited              dashboard-metrics-scraper   2                   360206fcbb1bf       dashboard-metrics-scraper-6ffb444bf9-ld9k9   kubernetes-dashboard
	c2406044b7f31       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago       Running             kubernetes-dashboard        0                   69575968b6bcf       kubernetes-dashboard-855c9754f9-8p6g2        kubernetes-dashboard
	043a8fb5117eb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   ee5ea96ac09da       busybox                                      default
	0e893e41892fa       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           58 seconds ago       Running             coredns                     0                   6624abc8e9db7       coredns-66bc5c9577-pnbct                     kube-system
	4cd2c8e35ef08       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           58 seconds ago       Running             kube-proxy                  0                   e832114b001be       kube-proxy-nbr2d                             kube-system
	5e1d1087d88f6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           58 seconds ago       Running             kindnet-cni                 0                   b4fe48d1eaf92       kindnet-mlqjm                                kube-system
	fd864c01850c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         0                   8d181ec3b3c58       storage-provisioner                          kube-system
	79f294b1af537       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   f30050e954087       kube-apiserver-embed-certs-535130            kube-system
	0cf664b8ea8fd       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   13eb1fc659970       kube-controller-manager-embed-certs-535130   kube-system
	43565d9e19139       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   91b9612dd56a7       kube-scheduler-embed-certs-535130            kube-system
	7f30d07b339ab       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   6cbb7d507b7ef       etcd-embed-certs-535130                      kube-system
	
	
	==> coredns [0e893e41892fa12c7ec68b76a502b7a243a84d94912ec68bf8757235766702b0] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39199 - 17047 "HINFO IN 1070022183893654826.2872744642068059383. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.077651463s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-535130
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-535130
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=embed-certs-535130
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_12_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:12:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-535130
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:14:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:14:21 +0000   Sun, 26 Oct 2025 15:12:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:14:21 +0000   Sun, 26 Oct 2025 15:12:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:14:21 +0000   Sun, 26 Oct 2025 15:12:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:14:21 +0000   Sun, 26 Oct 2025 15:12:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-535130
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                d2eb1dd1-3767-46c2-b62f-7198c6aeeadd
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-pnbct                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-embed-certs-535130                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-mlqjm                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-embed-certs-535130             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-embed-certs-535130    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-nbr2d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-embed-certs-535130             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ld9k9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8p6g2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  116s               kubelet          Node embed-certs-535130 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s               kubelet          Node embed-certs-535130 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s               kubelet          Node embed-certs-535130 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s               node-controller  Node embed-certs-535130 event: Registered Node embed-certs-535130 in Controller
	  Normal  NodeReady                100s               kubelet          Node embed-certs-535130 status is now: NodeReady
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node embed-certs-535130 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node embed-certs-535130 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node embed-certs-535130 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s                node-controller  Node embed-certs-535130 event: Registered Node embed-certs-535130 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	
	
	==> etcd [7f30d07b339ab7331f72cd45f5f34ee9c7eb82bec1197a77db9c34d2fcb6c24b] <==
	{"level":"warn","ts":"2025-10-26T15:13:39.756233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.763132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.770089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.780861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.788619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.796046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.804112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.812807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.821587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.830668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.838417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.855577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.863025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.871140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.879509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.887455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.895062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.913155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.921460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.930321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.982724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44424","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T15:14:26.329610Z","caller":"traceutil/trace.go:172","msg":"trace[1876156884] transaction","detail":"{read_only:false; response_revision:634; number_of_response:1; }","duration":"123.588948ms","start":"2025-10-26T15:14:26.206002Z","end":"2025-10-26T15:14:26.329591Z","steps":["trace[1876156884] 'process raft request'  (duration: 123.461118ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:14:27.504731Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.434489ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-10-26T15:14:27.504889Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.530082ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356222968320785 > lease_revoke:<id:59069a21150bd27e>","response":"size:28"}
	{"level":"info","ts":"2025-10-26T15:14:27.504935Z","caller":"traceutil/trace.go:172","msg":"trace[1520001001] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:634; }","duration":"128.666954ms","start":"2025-10-26T15:14:27.376249Z","end":"2025-10-26T15:14:27.504916Z","steps":["trace[1520001001] 'range keys from in-memory index tree'  (duration: 128.414716ms)"],"step_count":1}
	
	
	==> kernel <==
	 15:14:39 up  2:57,  0 user,  load average: 4.32, 3.13, 2.02
	Linux embed-certs-535130 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5e1d1087d88f63dfa08475c5c3d49f7e0a5ce8b0ccdf279101ffe4c56c135534] <==
	I1026 15:13:41.876837       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:13:41.969311       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 15:13:41.969732       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:13:41.969757       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:13:41.969785       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:13:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:13:42.175371       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:13:42.175398       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:13:42.175409       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:13:42.176441       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 15:14:12.176512       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1026 15:14:12.176632       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 15:14:12.176678       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 15:14:12.176686       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1026 15:14:13.775585       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:14:13.775637       1 metrics.go:72] Registering metrics
	I1026 15:14:13.775730       1 controller.go:711] "Syncing nftables rules"
	I1026 15:14:22.175052       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:14:22.175119       1 main.go:301] handling current node
	I1026 15:14:32.182876       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:14:32.182914       1 main.go:301] handling current node
	
	
	==> kube-apiserver [79f294b1af5377dbbe09bff36c0ce752c337fff26f468f52ba372eeae7c2fbd7] <==
	I1026 15:13:40.534093       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 15:13:40.534297       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 15:13:40.534399       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:13:40.534607       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1026 15:13:40.534652       1 aggregator.go:171] initial CRD sync complete...
	I1026 15:13:40.534668       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 15:13:40.534675       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:13:40.534676       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 15:13:40.534682       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:13:40.534697       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1026 15:13:40.543683       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1026 15:13:40.551714       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 15:13:40.568594       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:13:40.578709       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:13:40.837457       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 15:13:40.866505       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:13:40.886462       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:13:40.896365       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:13:40.903139       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:13:40.938873       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.85.9"}
	I1026 15:13:40.949675       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.161.127"}
	I1026 15:13:41.442846       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:13:44.289523       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:13:44.390682       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:13:44.439367       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0cf664b8ea8fd4397a4e4d0903d086cb617b472ad1631050bc542a9e5c06ca09] <==
	I1026 15:13:43.885593       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 15:13:43.885744       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 15:13:43.885812       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:13:43.885825       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:13:43.885835       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 15:13:43.885958       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 15:13:43.886099       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 15:13:43.886115       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 15:13:43.886303       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 15:13:43.886317       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1026 15:13:43.886331       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 15:13:43.886509       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 15:13:43.888238       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 15:13:43.890409       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 15:13:43.890533       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 15:13:43.890608       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-535130"
	I1026 15:13:43.890679       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 15:13:43.893190       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:13:43.893255       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 15:13:43.895425       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 15:13:43.895453       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:13:43.897553       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 15:13:43.902200       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 15:13:43.904536       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 15:13:43.907713       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [4cd2c8e35ef08093cad19d86eb698b67b7f3efc33cc6e0f1b1f9e57148715d1d] <==
	I1026 15:13:41.785324       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:13:41.873276       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:13:41.975128       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:13:41.975204       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1026 15:13:41.975285       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:13:42.000826       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:13:42.000915       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:13:42.008018       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:13:42.008661       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:13:42.008957       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:13:42.013308       1 config.go:200] "Starting service config controller"
	I1026 15:13:42.014755       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:13:42.014505       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:13:42.014889       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:13:42.014518       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:13:42.014942       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:13:42.014484       1 config.go:309] "Starting node config controller"
	I1026 15:13:42.014954       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:13:42.014960       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:13:42.115113       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:13:42.115202       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:13:42.115260       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [43565d9e1913984f12b45a1203fca769c7b760ccf18830408972ff108c39b9bf] <==
	I1026 15:13:39.739608       1 serving.go:386] Generated self-signed cert in-memory
	W1026 15:13:40.449193       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 15:13:40.449228       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 15:13:40.449240       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 15:13:40.449250       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 15:13:40.500577       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 15:13:40.500617       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:13:40.505923       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:13:40.506128       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:13:40.509611       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:13:40.509983       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 15:13:40.608859       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:13:44 embed-certs-535130 kubelet[732]: I1026 15:13:44.602901     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2gvz\" (UniqueName: \"kubernetes.io/projected/70961b5a-99b2-49fb-ab32-0ea0c0780577-kube-api-access-d2gvz\") pod \"dashboard-metrics-scraper-6ffb444bf9-ld9k9\" (UID: \"70961b5a-99b2-49fb-ab32-0ea0c0780577\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ld9k9"
	Oct 26 15:13:44 embed-certs-535130 kubelet[732]: I1026 15:13:44.602945     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbv4q\" (UniqueName: \"kubernetes.io/projected/0a6afa02-36ae-4637-8893-3f91d7a0fa0e-kube-api-access-fbv4q\") pod \"kubernetes-dashboard-855c9754f9-8p6g2\" (UID: \"0a6afa02-36ae-4637-8893-3f91d7a0fa0e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8p6g2"
	Oct 26 15:13:47 embed-certs-535130 kubelet[732]: I1026 15:13:47.265961     732 scope.go:117] "RemoveContainer" containerID="d720186b30651c6b7eb83d4718ca99b3b5e2e3982338e87a3f38eec6fb1541b5"
	Oct 26 15:13:48 embed-certs-535130 kubelet[732]: I1026 15:13:48.271401     732 scope.go:117] "RemoveContainer" containerID="d720186b30651c6b7eb83d4718ca99b3b5e2e3982338e87a3f38eec6fb1541b5"
	Oct 26 15:13:48 embed-certs-535130 kubelet[732]: I1026 15:13:48.271577     732 scope.go:117] "RemoveContainer" containerID="d9cb2be8dd51f75845758f5bfc36df5249c9c168e67bf68e57673d40b22797d5"
	Oct 26 15:13:48 embed-certs-535130 kubelet[732]: E1026 15:13:48.271780     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ld9k9_kubernetes-dashboard(70961b5a-99b2-49fb-ab32-0ea0c0780577)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ld9k9" podUID="70961b5a-99b2-49fb-ab32-0ea0c0780577"
	Oct 26 15:13:49 embed-certs-535130 kubelet[732]: I1026 15:13:49.276118     732 scope.go:117] "RemoveContainer" containerID="d9cb2be8dd51f75845758f5bfc36df5249c9c168e67bf68e57673d40b22797d5"
	Oct 26 15:13:49 embed-certs-535130 kubelet[732]: E1026 15:13:49.276357     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ld9k9_kubernetes-dashboard(70961b5a-99b2-49fb-ab32-0ea0c0780577)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ld9k9" podUID="70961b5a-99b2-49fb-ab32-0ea0c0780577"
	Oct 26 15:13:51 embed-certs-535130 kubelet[732]: I1026 15:13:51.300329     732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8p6g2" podStartSLOduration=1.193335998 podStartE2EDuration="7.30029739s" podCreationTimestamp="2025-10-26 15:13:44 +0000 UTC" firstStartedPulling="2025-10-26 15:13:44.829624028 +0000 UTC m=+6.732124344" lastFinishedPulling="2025-10-26 15:13:50.936585485 +0000 UTC m=+12.839085736" observedRunningTime="2025-10-26 15:13:51.298409894 +0000 UTC m=+13.200910154" watchObservedRunningTime="2025-10-26 15:13:51.30029739 +0000 UTC m=+13.202797649"
	Oct 26 15:13:52 embed-certs-535130 kubelet[732]: I1026 15:13:52.725673     732 scope.go:117] "RemoveContainer" containerID="d9cb2be8dd51f75845758f5bfc36df5249c9c168e67bf68e57673d40b22797d5"
	Oct 26 15:13:52 embed-certs-535130 kubelet[732]: E1026 15:13:52.725910     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ld9k9_kubernetes-dashboard(70961b5a-99b2-49fb-ab32-0ea0c0780577)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ld9k9" podUID="70961b5a-99b2-49fb-ab32-0ea0c0780577"
	Oct 26 15:14:07 embed-certs-535130 kubelet[732]: I1026 15:14:07.202315     732 scope.go:117] "RemoveContainer" containerID="d9cb2be8dd51f75845758f5bfc36df5249c9c168e67bf68e57673d40b22797d5"
	Oct 26 15:14:07 embed-certs-535130 kubelet[732]: I1026 15:14:07.328504     732 scope.go:117] "RemoveContainer" containerID="d9cb2be8dd51f75845758f5bfc36df5249c9c168e67bf68e57673d40b22797d5"
	Oct 26 15:14:07 embed-certs-535130 kubelet[732]: I1026 15:14:07.328751     732 scope.go:117] "RemoveContainer" containerID="3102e249f41ed7e55df37fbb93359807120c0dd5cf37e7ee6fdf6e1c85f14410"
	Oct 26 15:14:07 embed-certs-535130 kubelet[732]: E1026 15:14:07.329364     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ld9k9_kubernetes-dashboard(70961b5a-99b2-49fb-ab32-0ea0c0780577)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ld9k9" podUID="70961b5a-99b2-49fb-ab32-0ea0c0780577"
	Oct 26 15:14:12 embed-certs-535130 kubelet[732]: I1026 15:14:12.357023     732 scope.go:117] "RemoveContainer" containerID="fd864c01850c3a39fcff70d2a1c10ffa508c1d4673cb99b9ac1d5cb6d772026e"
	Oct 26 15:14:12 embed-certs-535130 kubelet[732]: I1026 15:14:12.725916     732 scope.go:117] "RemoveContainer" containerID="3102e249f41ed7e55df37fbb93359807120c0dd5cf37e7ee6fdf6e1c85f14410"
	Oct 26 15:14:12 embed-certs-535130 kubelet[732]: E1026 15:14:12.726135     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ld9k9_kubernetes-dashboard(70961b5a-99b2-49fb-ab32-0ea0c0780577)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ld9k9" podUID="70961b5a-99b2-49fb-ab32-0ea0c0780577"
	Oct 26 15:14:26 embed-certs-535130 kubelet[732]: I1026 15:14:26.202153     732 scope.go:117] "RemoveContainer" containerID="3102e249f41ed7e55df37fbb93359807120c0dd5cf37e7ee6fdf6e1c85f14410"
	Oct 26 15:14:26 embed-certs-535130 kubelet[732]: E1026 15:14:26.202447     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ld9k9_kubernetes-dashboard(70961b5a-99b2-49fb-ab32-0ea0c0780577)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ld9k9" podUID="70961b5a-99b2-49fb-ab32-0ea0c0780577"
	Oct 26 15:14:37 embed-certs-535130 kubelet[732]: I1026 15:14:37.202493     732 scope.go:117] "RemoveContainer" containerID="3102e249f41ed7e55df37fbb93359807120c0dd5cf37e7ee6fdf6e1c85f14410"
	Oct 26 15:14:37 embed-certs-535130 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:14:37 embed-certs-535130 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:14:37 embed-certs-535130 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 26 15:14:37 embed-certs-535130 systemd[1]: kubelet.service: Consumed 1.891s CPU time.
	
	
	==> kubernetes-dashboard [c2406044b7f315c5b1ee3f4019f3a406d40d7ef84f78714460b6156504465324] <==
	2025/10/26 15:13:51 Using namespace: kubernetes-dashboard
	2025/10/26 15:13:51 Using in-cluster config to connect to apiserver
	2025/10/26 15:13:51 Using secret token for csrf signing
	2025/10/26 15:13:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 15:13:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 15:13:51 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 15:13:51 Generating JWE encryption key
	2025/10/26 15:13:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 15:13:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 15:13:51 Initializing JWE encryption key from synchronized object
	2025/10/26 15:13:51 Creating in-cluster Sidecar client
	2025/10/26 15:13:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:13:51 Serving insecurely on HTTP port: 9090
	2025/10/26 15:13:51 Starting overwatch
	2025/10/26 15:14:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [239e148b0a6d4ade1fbee745dd81f15d67ba591399800ea09cf65541f7517cf7] <==
	I1026 15:14:12.451788       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 15:14:12.451875       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 15:14:12.455152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:15.909878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:20.173152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:23.771594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:26.825514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:29.847852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:29.852425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:14:29.852591       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:14:29.852736       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-535130_e6c1962c-4ba7-47de-82fa-3cd0dbdc059b!
	I1026 15:14:29.852711       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5f6cf1c0-4446-438f-959a-2cf0430f7cb8", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-535130_e6c1962c-4ba7-47de-82fa-3cd0dbdc059b became leader
	W1026 15:14:29.854933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:29.858810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:14:29.953839       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-535130_e6c1962c-4ba7-47de-82fa-3cd0dbdc059b!
	W1026 15:14:31.862618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:31.866942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:33.870919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:33.879657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:35.882735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:35.886590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:37.889613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:37.894971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:39.898056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:39.902098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fd864c01850c3a39fcff70d2a1c10ffa508c1d4673cb99b9ac1d5cb6d772026e] <==
	I1026 15:13:41.681436       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 15:14:11.685702       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
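The captured logs point at one consistent failure mode: crio reported "CreateCtr: context was either canceled or the deadline was exceeded" and systemd stopped kubelet at 15:14:37, i.e. while the pause was in flight, and CoreDNS, kindnet and the first storage-provisioner all hit "dial tcp 10.96.0.1:443: i/o timeout" against the in-cluster apiserver Service after the restart. A minimal sketch for reproducing those checks by hand, assuming the profile name from this run (netcheck is a hypothetical throwaway pod, not part of the suite):

	# Confirm kubelet really is stopped and list the exited dashboard-metrics-scraper attempts:
	minikube -p embed-certs-535130 ssh -- sudo systemctl is-active kubelet
	minikube -p embed-certs-535130 ssh -- sudo crictl ps -a --name dashboard-metrics-scraper
	# Probe the Service IP that CoreDNS and kindnet timed out against, from inside the cluster
	# (assumes the busybox build ships nc with -z/-v/-w support, as the official image does):
	kubectl --context embed-certs-535130 run netcheck --rm -it --restart=Never \
	  --image=busybox:1.36 -- nc -zv -w 2 10.96.0.1 443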
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-535130 -n embed-certs-535130
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-535130 -n embed-certs-535130: exit status 2 (382.311311ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
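--format takes a Go template over minikube's status struct, so a single field can print Running while the command still exits non-zero because another component (here, the stopped kubelet) is degraded. A quick way to see all fields at once, assuming the same profile, is a sketch like:

	out/minikube-linux-amd64 status -p embed-certs-535130 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'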
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-535130 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-535130
helpers_test.go:243: (dbg) docker inspect embed-certs-535130:

-- stdout --
	[
	    {
	        "Id": "51b1644009afa6cc6a6d9bc914c49eccef03f48b557ac0a8540a6c8848111e36",
	        "Created": "2025-10-26T15:12:28.122091236Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1114009,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:13:31.415539146Z",
	            "FinishedAt": "2025-10-26T15:13:30.484008333Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/51b1644009afa6cc6a6d9bc914c49eccef03f48b557ac0a8540a6c8848111e36/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51b1644009afa6cc6a6d9bc914c49eccef03f48b557ac0a8540a6c8848111e36/hostname",
	        "HostsPath": "/var/lib/docker/containers/51b1644009afa6cc6a6d9bc914c49eccef03f48b557ac0a8540a6c8848111e36/hosts",
	        "LogPath": "/var/lib/docker/containers/51b1644009afa6cc6a6d9bc914c49eccef03f48b557ac0a8540a6c8848111e36/51b1644009afa6cc6a6d9bc914c49eccef03f48b557ac0a8540a6c8848111e36-json.log",
	        "Name": "/embed-certs-535130",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-535130:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-535130",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51b1644009afa6cc6a6d9bc914c49eccef03f48b557ac0a8540a6c8848111e36",
	                "LowerDir": "/var/lib/docker/overlay2/f468af4290f0273a6f3d234c071dd725d57ae77fd19dd3a00f1d124df21e3267-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f468af4290f0273a6f3d234c071dd725d57ae77fd19dd3a00f1d124df21e3267/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f468af4290f0273a6f3d234c071dd725d57ae77fd19dd3a00f1d124df21e3267/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f468af4290f0273a6f3d234c071dd725d57ae77fd19dd3a00f1d124df21e3267/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-535130",
	                "Source": "/var/lib/docker/volumes/embed-certs-535130/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-535130",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-535130",
	                "name.minikube.sigs.k8s.io": "embed-certs-535130",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "204653b8b321268d0c8cc60442bc19a90dc557b4c2a7b883efb8af5e6b54170a",
	            "SandboxKey": "/var/run/docker/netns/204653b8b321",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33862"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33863"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33866"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33864"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33865"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-535130": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:3e:65:d6:7b:90",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c696734ed668df0fca3efb0f7c1c0265275f09b80d9a59f85ab28b09787295d5",
	                    "EndpointID": "0f7fefc0af864babc78ea885345a53079d24f7387f6cc53b0aa5025d9fde6a38",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-535130",
	                        "51b1644009af"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
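For reference, the inspect output above carries everything needed to reach this cluster from the host: the container holds 192.168.76.2 on the embed-certs-535130 network and publishes 8443/tcp on 127.0.0.1:33865. Extracting that mapping with docker's own Go-template support is a one-liner (a sketch against the data captured above):

	# Prints 33865 for the capture above; the apiserver is then https://127.0.0.1:33865 from the host.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-535130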
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-535130 -n embed-certs-535130
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-535130 -n embed-certs-535130: exit status 2 (383.558816ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
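
As helpers_test.go notes, a non-zero exit from "minikube status" is informational rather than fatal here: the exit code encodes component state, and the host still reports "Running" despite exit status 2. A minimal Go sketch of tolerating that convention (binary path and profile are the ones from this report; treating a non-zero exit as "may be ok" is this harness's policy, not a general rule):

	// status_check.go: run `minikube status` and keep its stdout even when
	// the command exits non-zero, mirroring the "may be ok" handling above.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "embed-certs-535130")
		out, err := cmd.Output() // stdout is still returned on *exec.ExitError
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Printf("status exited %d (may be ok): %s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("host: %s", out)
	}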
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-535130 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-535130 logs -n 25: (1.221266335s)
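
The post-mortem harvest itself is just the profile's recent log tail, captured with the same "logs -n 25" invocation shown above. A small sketch of that capture step, redirecting stdout to a file (the output filename is invented for illustration):

	// collect_logs.go: save the last 25 minikube log lines for attachment.
	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		f, err := os.Create("embed-certs-535130-postmortem.log") // illustrative path
		if err != nil {
			panic(err)
		}
		defer f.Close()

		cmd := exec.Command("out/minikube-linux-amd64", "-p", "embed-certs-535130",
			"logs", "-n", "25")
		cmd.Stdout = f
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}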
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                  │      PROFILE       │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-498531 sudo journalctl -xeu kubelet --all --full --no-pager                                                                    │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo cat /etc/kubernetes/kubelet.conf                                                                                   │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo cat /var/lib/kubelet/config.yaml                                                                                   │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo systemctl status docker --all --full --no-pager                                                                    │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ ssh     │ -p auto-498531 sudo systemctl cat docker --no-pager                                                                                    │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo cat /etc/docker/daemon.json                                                                                        │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ ssh     │ -p auto-498531 sudo docker system info                                                                                                 │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ ssh     │ -p auto-498531 sudo systemctl status cri-docker --all --full --no-pager                                                                │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ ssh     │ -p auto-498531 sudo systemctl cat cri-docker --no-pager                                                                                │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                           │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ ssh     │ -p auto-498531 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                     │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo cri-dockerd --version                                                                                              │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo systemctl status containerd --all --full --no-pager                                                                │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ ssh     │ -p auto-498531 sudo systemctl cat containerd --no-pager                                                                                │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo cat /lib/systemd/system/containerd.service                                                                         │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo cat /etc/containerd/config.toml                                                                                    │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo containerd config dump                                                                                             │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo systemctl status crio --all --full --no-pager                                                                      │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo systemctl cat crio --no-pager                                                                                      │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p auto-498531 sudo crio config                                                                                                        │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ delete  │ -p auto-498531                                                                                                                         │ auto-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ start   │ -p calico-498531 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio │ calico-498531      │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ image   │ embed-certs-535130 image list --format=json                                                                                            │ embed-certs-535130 │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ pause   │ -p embed-certs-535130 --alsologtostderr -v=1                                                                                           │ embed-certs-535130 │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:14:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:14:22.027033 1131084 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:14:22.027189 1131084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:14:22.027197 1131084 out.go:374] Setting ErrFile to fd 2...
	I1026 15:14:22.027203 1131084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:14:22.027481 1131084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:14:22.028142 1131084 out.go:368] Setting JSON to false
	I1026 15:14:22.030037 1131084 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10610,"bootTime":1761481052,"procs":408,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:14:22.030199 1131084 start.go:141] virtualization: kvm guest
	I1026 15:14:22.033654 1131084 out.go:179] * [calico-498531] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:14:22.035469 1131084 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:14:22.035521 1131084 notify.go:220] Checking for updates...
	I1026 15:14:22.038257 1131084 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:14:22.039696 1131084 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:14:22.041116 1131084 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 15:14:22.045858 1131084 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:14:22.047496 1131084 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:14:22.049256 1131084 config.go:182] Loaded profile config "default-k8s-diff-port-790012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:22.049393 1131084 config.go:182] Loaded profile config "embed-certs-535130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:22.049518 1131084 config.go:182] Loaded profile config "kindnet-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:22.049761 1131084 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:14:22.082087 1131084 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 15:14:22.082232 1131084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:14:22.158702 1131084 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 15:14:22.145242478 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:14:22.158821 1131084 docker.go:318] overlay module found
	I1026 15:14:22.160896 1131084 out.go:179] * Using the docker driver based on user configuration
	I1026 15:14:21.082714 1122250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:21.582393 1122250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:22.082722 1122250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:22.172853 1122250 kubeadm.go:1113] duration metric: took 4.715500659s to wait for elevateKubeSystemPrivileges
	I1026 15:14:22.172889 1122250 kubeadm.go:402] duration metric: took 16.379310994s to StartCluster
	I1026 15:14:22.172911 1122250 settings.go:142] acquiring lock: {Name:mkab79daecf1fab35293493e1e2484069a81f3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:22.172985 1122250 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:14:22.175304 1122250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:22.175585 1122250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:14:22.175587 1122250 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:14:22.175698 1122250 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:14:22.175805 1122250 config.go:182] Loaded profile config "kindnet-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:22.175815 1122250 addons.go:69] Setting default-storageclass=true in profile "kindnet-498531"
	I1026 15:14:22.175832 1122250 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-498531"
	I1026 15:14:22.175807 1122250 addons.go:69] Setting storage-provisioner=true in profile "kindnet-498531"
	I1026 15:14:22.175869 1122250 addons.go:238] Setting addon storage-provisioner=true in "kindnet-498531"
	I1026 15:14:22.175897 1122250 host.go:66] Checking if "kindnet-498531" exists ...
	I1026 15:14:22.176275 1122250 cli_runner.go:164] Run: docker container inspect kindnet-498531 --format={{.State.Status}}
	I1026 15:14:22.176713 1122250 cli_runner.go:164] Run: docker container inspect kindnet-498531 --format={{.State.Status}}
	I1026 15:14:22.178384 1122250 out.go:179] * Verifying Kubernetes components...
	I1026 15:14:22.179747 1122250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:14:22.225875 1122250 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:14:22.164829 1131084 start.go:305] selected driver: docker
	I1026 15:14:22.164855 1131084 start.go:925] validating driver "docker" against <nil>
	I1026 15:14:22.164873 1131084 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:14:22.165618 1131084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:14:22.283220 1131084 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-10-26 15:14:22.261984462 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:14:22.283686 1131084 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:14:22.284045 1131084 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:14:22.286031 1131084 out.go:179] * Using Docker driver with root privileges
	I1026 15:14:22.287355 1131084 cni.go:84] Creating CNI manager for "calico"
	I1026 15:14:22.287381 1131084 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1026 15:14:22.287541 1131084 start.go:349] cluster config:
	{Name:calico-498531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:14:22.289718 1131084 out.go:179] * Starting "calico-498531" primary control-plane node in "calico-498531" cluster
	I1026 15:14:22.291056 1131084 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:14:22.293520 1131084 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:14:22.295704 1131084 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:14:22.295766 1131084 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 15:14:22.295776 1131084 cache.go:58] Caching tarball of preloaded images
	I1026 15:14:22.295848 1131084 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:14:22.295922 1131084 preload.go:233] Found /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 15:14:22.295935 1131084 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:14:22.296068 1131084 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/config.json ...
	I1026 15:14:22.296094 1131084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/config.json: {Name:mk608ef37dee609688bd00cb752182a38a72f55a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:22.322410 1131084 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:14:22.322446 1131084 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:14:22.322469 1131084 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:14:22.322508 1131084 start.go:360] acquireMachinesLock for calico-498531: {Name:mkad5fbf5f1a91b92ec641cca7eb150eb880ccbc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:14:22.322626 1131084 start.go:364] duration metric: took 94.47µs to acquireMachinesLock for "calico-498531"
	I1026 15:14:22.322657 1131084 start.go:93] Provisioning new machine with config: &{Name:calico-498531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:14:22.322774 1131084 start.go:125] createHost starting for "" (driver="docker")
	I1026 15:14:22.226245 1122250 addons.go:238] Setting addon default-storageclass=true in "kindnet-498531"
	I1026 15:14:22.226315 1122250 host.go:66] Checking if "kindnet-498531" exists ...
	I1026 15:14:22.226830 1122250 cli_runner.go:164] Run: docker container inspect kindnet-498531 --format={{.State.Status}}
	I1026 15:14:22.228152 1122250 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:14:22.228288 1122250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:14:22.229229 1122250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-498531
	I1026 15:14:22.266297 1122250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33872 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/kindnet-498531/id_rsa Username:docker}
	I1026 15:14:22.269327 1122250 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:14:22.269464 1122250 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:14:22.269588 1122250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-498531
	I1026 15:14:22.295754 1122250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33872 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/kindnet-498531/id_rsa Username:docker}
	I1026 15:14:22.314656 1122250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 15:14:22.376057 1122250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:14:22.408442 1122250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:14:22.436007 1122250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:14:22.559507 1122250 node_ready.go:35] waiting up to 15m0s for node "kindnet-498531" to be "Ready" ...
	I1026 15:14:22.559939 1122250 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1026 15:14:22.995731 1122250 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1026 15:14:23.623993 1113766 pod_ready.go:94] pod "coredns-66bc5c9577-pnbct" is "Ready"
	I1026 15:14:23.624022 1113766 pod_ready.go:86] duration metric: took 41.506638475s for pod "coredns-66bc5c9577-pnbct" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:23.626676 1113766 pod_ready.go:83] waiting for pod "etcd-embed-certs-535130" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:23.631235 1113766 pod_ready.go:94] pod "etcd-embed-certs-535130" is "Ready"
	I1026 15:14:23.631260 1113766 pod_ready.go:86] duration metric: took 4.560994ms for pod "etcd-embed-certs-535130" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:23.633340 1113766 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-535130" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:23.637977 1113766 pod_ready.go:94] pod "kube-apiserver-embed-certs-535130" is "Ready"
	I1026 15:14:23.638002 1113766 pod_ready.go:86] duration metric: took 4.63905ms for pod "kube-apiserver-embed-certs-535130" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:23.640311 1113766 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-535130" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:23.821988 1113766 pod_ready.go:94] pod "kube-controller-manager-embed-certs-535130" is "Ready"
	I1026 15:14:23.822027 1113766 pod_ready.go:86] duration metric: took 181.691288ms for pod "kube-controller-manager-embed-certs-535130" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:24.022415 1113766 pod_ready.go:83] waiting for pod "kube-proxy-nbr2d" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:24.422044 1113766 pod_ready.go:94] pod "kube-proxy-nbr2d" is "Ready"
	I1026 15:14:24.422081 1113766 pod_ready.go:86] duration metric: took 399.634014ms for pod "kube-proxy-nbr2d" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:24.622378 1113766 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-535130" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:25.021678 1113766 pod_ready.go:94] pod "kube-scheduler-embed-certs-535130" is "Ready"
	I1026 15:14:25.021707 1113766 pod_ready.go:86] duration metric: took 399.302305ms for pod "kube-scheduler-embed-certs-535130" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:25.021720 1113766 pod_ready.go:40] duration metric: took 42.913142082s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:14:25.068443 1113766 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:14:25.072045 1113766 out.go:179] * Done! kubectl is now configured to use "embed-certs-535130" cluster and "default" namespace by default
	I1026 15:14:22.997241 1122250 addons.go:514] duration metric: took 821.534236ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 15:14:23.064887 1122250 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-498531" context rescaled to 1 replicas
	W1026 15:14:24.563424 1122250 node_ready.go:57] node "kindnet-498531" has "Ready":"False" status (will retry)
	W1026 15:14:22.586876 1123102 pod_ready.go:104] pod "coredns-66bc5c9577-shw6l" is not "Ready", error: <nil>
	W1026 15:14:25.084889 1123102 pod_ready.go:104] pod "coredns-66bc5c9577-shw6l" is not "Ready", error: <nil>
	I1026 15:14:22.325114 1131084 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 15:14:22.325472 1131084 start.go:159] libmachine.API.Create for "calico-498531" (driver="docker")
	I1026 15:14:22.325515 1131084 client.go:168] LocalClient.Create starting
	I1026 15:14:22.325616 1131084 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem
	I1026 15:14:22.325662 1131084 main.go:141] libmachine: Decoding PEM data...
	I1026 15:14:22.325686 1131084 main.go:141] libmachine: Parsing certificate...
	I1026 15:14:22.325769 1131084 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem
	I1026 15:14:22.325801 1131084 main.go:141] libmachine: Decoding PEM data...
	I1026 15:14:22.325814 1131084 main.go:141] libmachine: Parsing certificate...
	I1026 15:14:22.326289 1131084 cli_runner.go:164] Run: docker network inspect calico-498531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 15:14:22.352348 1131084 cli_runner.go:211] docker network inspect calico-498531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 15:14:22.352457 1131084 network_create.go:284] running [docker network inspect calico-498531] to gather additional debugging logs...
	I1026 15:14:22.352482 1131084 cli_runner.go:164] Run: docker network inspect calico-498531
	W1026 15:14:22.375684 1131084 cli_runner.go:211] docker network inspect calico-498531 returned with exit code 1
	I1026 15:14:22.375723 1131084 network_create.go:287] error running [docker network inspect calico-498531]: docker network inspect calico-498531: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-498531 not found
	I1026 15:14:22.375740 1131084 network_create.go:289] output of [docker network inspect calico-498531]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-498531 not found
	
	** /stderr **
	I1026 15:14:22.375893 1131084 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:14:22.401795 1131084 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fa58be42f477 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:e4:ad:45:54:67} reservation:<nil>}
	I1026 15:14:22.403013 1131084 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-788b1aa150f9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:3d:9b:f7:9b:2d} reservation:<nil>}
	I1026 15:14:22.405711 1131084 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3ea0f8afe5af IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:d6:81:f4:17:77:eb} reservation:<nil>}
	I1026 15:14:22.406512 1131084 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c696734ed668 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:9a:3a:13:85:1e} reservation:<nil>}
	I1026 15:14:22.407969 1131084 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-eb8db690bfd7 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:c2:80:70:9a:55:40} reservation:<nil>}
	I1026 15:14:22.409485 1131084 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fde330}
	I1026 15:14:22.409584 1131084 network_create.go:124] attempt to create docker network calico-498531 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1026 15:14:22.409673 1131084 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-498531 calico-498531
	I1026 15:14:22.501874 1131084 network_create.go:108] docker network calico-498531 192.168.94.0/24 created
	I1026 15:14:22.501935 1131084 kic.go:121] calculated static IP "192.168.94.2" for the "calico-498531" container
	I1026 15:14:22.502006 1131084 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 15:14:22.526630 1131084 cli_runner.go:164] Run: docker volume create calico-498531 --label name.minikube.sigs.k8s.io=calico-498531 --label created_by.minikube.sigs.k8s.io=true
	I1026 15:14:22.551624 1131084 oci.go:103] Successfully created a docker volume calico-498531
	I1026 15:14:22.551954 1131084 cli_runner.go:164] Run: docker run --rm --name calico-498531-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-498531 --entrypoint /usr/bin/test -v calico-498531:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 15:14:23.368364 1131084 oci.go:107] Successfully prepared a docker volume calico-498531
	I1026 15:14:23.368408 1131084 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:14:23.368433 1131084 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 15:14:23.368484 1131084 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-498531:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1026 15:14:26.632252 1122250 node_ready.go:57] node "kindnet-498531" has "Ready":"False" status (will retry)
	W1026 15:14:29.063056 1122250 node_ready.go:57] node "kindnet-498531" has "Ready":"False" status (will retry)
	W1026 15:14:27.584521 1123102 pod_ready.go:104] pod "coredns-66bc5c9577-shw6l" is not "Ready", error: <nil>
	W1026 15:14:30.083374 1123102 pod_ready.go:104] pod "coredns-66bc5c9577-shw6l" is not "Ready", error: <nil>
	I1026 15:14:27.953615 1131084 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-498531:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.585071065s)
	I1026 15:14:27.953649 1131084 kic.go:203] duration metric: took 4.585211147s to extract preloaded images to volume ...
	W1026 15:14:27.953747 1131084 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 15:14:27.953808 1131084 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 15:14:27.953860 1131084 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 15:14:28.015980 1131084 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-498531 --name calico-498531 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-498531 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-498531 --network calico-498531 --ip 192.168.94.2 --volume calico-498531:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 15:14:28.312900 1131084 cli_runner.go:164] Run: docker container inspect calico-498531 --format={{.State.Running}}
	I1026 15:14:28.332498 1131084 cli_runner.go:164] Run: docker container inspect calico-498531 --format={{.State.Status}}
	I1026 15:14:28.352974 1131084 cli_runner.go:164] Run: docker exec calico-498531 stat /var/lib/dpkg/alternatives/iptables
	I1026 15:14:28.400948 1131084 oci.go:144] the created container "calico-498531" has a running status.
	I1026 15:14:28.400983 1131084 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/calico-498531/id_rsa...
	I1026 15:14:28.788269 1131084 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-841519/.minikube/machines/calico-498531/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 15:14:28.817847 1131084 cli_runner.go:164] Run: docker container inspect calico-498531 --format={{.State.Status}}
	I1026 15:14:28.837847 1131084 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 15:14:28.837871 1131084 kic_runner.go:114] Args: [docker exec --privileged calico-498531 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 15:14:28.880976 1131084 cli_runner.go:164] Run: docker container inspect calico-498531 --format={{.State.Status}}
	I1026 15:14:28.900659 1131084 machine.go:93] provisionDockerMachine start ...
	I1026 15:14:28.900758 1131084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-498531
	I1026 15:14:28.920100 1131084 main.go:141] libmachine: Using SSH client type: native
	I1026 15:14:28.920491 1131084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33882 <nil> <nil>}
	I1026 15:14:28.920520 1131084 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:14:29.067639 1131084 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-498531
	
	I1026 15:14:29.067663 1131084 ubuntu.go:182] provisioning hostname "calico-498531"
	I1026 15:14:29.067734 1131084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-498531
	I1026 15:14:29.087948 1131084 main.go:141] libmachine: Using SSH client type: native
	I1026 15:14:29.088204 1131084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33882 <nil> <nil>}
	I1026 15:14:29.088219 1131084 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-498531 && echo "calico-498531" | sudo tee /etc/hostname
	I1026 15:14:29.240877 1131084 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-498531
	
	I1026 15:14:29.240966 1131084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-498531
	I1026 15:14:29.259356 1131084 main.go:141] libmachine: Using SSH client type: native
	I1026 15:14:29.259591 1131084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33882 <nil> <nil>}
	I1026 15:14:29.259612 1131084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-498531' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-498531/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-498531' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:14:29.403508 1131084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:14:29.403544 1131084 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-841519/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-841519/.minikube}
	I1026 15:14:29.403575 1131084 ubuntu.go:190] setting up certificates
	I1026 15:14:29.403592 1131084 provision.go:84] configureAuth start
	I1026 15:14:29.403661 1131084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-498531
	I1026 15:14:29.422912 1131084 provision.go:143] copyHostCerts
	I1026 15:14:29.422986 1131084 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem, removing ...
	I1026 15:14:29.423000 1131084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem
	I1026 15:14:29.423089 1131084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem (1675 bytes)
	I1026 15:14:29.423243 1131084 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem, removing ...
	I1026 15:14:29.423257 1131084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem
	I1026 15:14:29.423307 1131084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem (1082 bytes)
	I1026 15:14:29.423390 1131084 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem, removing ...
	I1026 15:14:29.423400 1131084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem
	I1026 15:14:29.423437 1131084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem (1123 bytes)
	I1026 15:14:29.423514 1131084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem org=jenkins.calico-498531 san=[127.0.0.1 192.168.94.2 calico-498531 localhost minikube]
	I1026 15:14:29.781033 1131084 provision.go:177] copyRemoteCerts
	I1026 15:14:29.781101 1131084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:14:29.781151 1131084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-498531
	I1026 15:14:29.800111 1131084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33882 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/calico-498531/id_rsa Username:docker}
	I1026 15:14:29.903471 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:14:29.923827 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 15:14:29.942992 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:14:29.961590 1131084 provision.go:87] duration metric: took 557.975669ms to configureAuth
	I1026 15:14:29.961625 1131084 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:14:29.961848 1131084 config.go:182] Loaded profile config "calico-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:29.962010 1131084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-498531
	I1026 15:14:29.980686 1131084 main.go:141] libmachine: Using SSH client type: native
	I1026 15:14:29.980906 1131084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33882 <nil> <nil>}
	I1026 15:14:29.980922 1131084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:14:30.245480 1131084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:14:30.245506 1131084 machine.go:96] duration metric: took 1.344824059s to provisionDockerMachine
	I1026 15:14:30.245515 1131084 client.go:171] duration metric: took 7.919992759s to LocalClient.Create
	I1026 15:14:30.245532 1131084 start.go:167] duration metric: took 7.920066064s to libmachine.API.Create "calico-498531"
	I1026 15:14:30.245539 1131084 start.go:293] postStartSetup for "calico-498531" (driver="docker")
	I1026 15:14:30.245549 1131084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:14:30.245607 1131084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:14:30.245646 1131084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-498531
	I1026 15:14:30.263433 1131084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33882 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/calico-498531/id_rsa Username:docker}
	I1026 15:14:30.367021 1131084 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:14:30.371039 1131084 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:14:30.371075 1131084 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:14:30.371086 1131084 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/addons for local assets ...
	I1026 15:14:30.371156 1131084 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/files for local assets ...
	I1026 15:14:30.371273 1131084 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem -> 8450952.pem in /etc/ssl/certs
	I1026 15:14:30.371413 1131084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:14:30.379997 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:14:30.401834 1131084 start.go:296] duration metric: took 156.256952ms for postStartSetup
	I1026 15:14:30.402266 1131084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-498531
	I1026 15:14:30.421441 1131084 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/config.json ...
	I1026 15:14:30.421726 1131084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:14:30.421776 1131084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-498531
	I1026 15:14:30.441111 1131084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33882 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/calico-498531/id_rsa Username:docker}
	I1026 15:14:30.540925 1131084 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:14:30.546045 1131084 start.go:128] duration metric: took 8.223251302s to createHost
	I1026 15:14:30.546069 1131084 start.go:83] releasing machines lock for "calico-498531", held for 8.223429226s
	I1026 15:14:30.546143 1131084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-498531
	I1026 15:14:30.564531 1131084 ssh_runner.go:195] Run: cat /version.json
	I1026 15:14:30.564591 1131084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-498531
	I1026 15:14:30.564610 1131084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:14:30.564683 1131084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-498531
	I1026 15:14:30.584193 1131084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33882 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/calico-498531/id_rsa Username:docker}
	I1026 15:14:30.584970 1131084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33882 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/calico-498531/id_rsa Username:docker}
	I1026 15:14:30.681718 1131084 ssh_runner.go:195] Run: systemctl --version
	I1026 15:14:30.740047 1131084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:14:30.777313 1131084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:14:30.782352 1131084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:14:30.782416 1131084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:14:30.813017 1131084 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 15:14:30.813046 1131084 start.go:495] detecting cgroup driver to use...
	I1026 15:14:30.813083 1131084 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 15:14:30.813130 1131084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:14:30.830799 1131084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:14:30.844484 1131084 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:14:30.844543 1131084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:14:30.862378 1131084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:14:30.883515 1131084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:14:30.969532 1131084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:14:31.061732 1131084 docker.go:234] disabling docker service ...
	I1026 15:14:31.061830 1131084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:14:31.081856 1131084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:14:31.095826 1131084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:14:31.181238 1131084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:14:31.267605 1131084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:14:31.281188 1131084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:14:31.296556 1131084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:14:31.296624 1131084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:31.308522 1131084 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 15:14:31.308593 1131084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:31.318361 1131084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:31.328061 1131084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:31.338149 1131084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:14:31.347397 1131084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:31.357082 1131084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:31.372068 1131084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
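Taken together, the sed edits above leave the CRI-O drop-in with a pinned pause image, the systemd cgroup manager, conmon in the pod cgroup, and unprivileged low ports enabled. Reconstructed from the commands (the section headers are where upstream CRI-O groups these keys; the actual drop-in on the node may order them differently):

    # /etc/crio/crio.conf.d/02-crio.conf (sketch)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]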
	I1026 15:14:31.381727 1131084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:14:31.389780 1131084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:14:31.397805 1131084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:14:31.483517 1131084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:14:31.597744 1131084 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:14:31.597822 1131084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:14:31.602759 1131084 start.go:563] Will wait 60s for crictl version
	I1026 15:14:31.602815 1131084 ssh_runner.go:195] Run: which crictl
	I1026 15:14:31.607206 1131084 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:14:31.632818 1131084 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:14:31.632898 1131084 ssh_runner.go:195] Run: crio --version
	I1026 15:14:31.663087 1131084 ssh_runner.go:195] Run: crio --version
	I1026 15:14:31.694487 1131084 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:14:31.695628 1131084 cli_runner.go:164] Run: docker network inspect calico-498531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:14:31.714194 1131084 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1026 15:14:31.718733 1131084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
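That one-liner is minikube's idempotent /etc/hosts update: filter out any previous entry for the name, append the fresh mapping, then copy the temp file back under sudo (a bare redirect into /etc/hosts would run unprivileged and fail). Generalized, with the IP and name taken from this run:

    IP=192.168.94.1 NAME=host.minikube.internal
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts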
	I1026 15:14:31.729794 1131084 kubeadm.go:883] updating cluster {Name:calico-498531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:14:31.729983 1131084 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:14:31.730056 1131084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:14:31.764478 1131084 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:14:31.764505 1131084 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:14:31.764565 1131084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:14:31.791829 1131084 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:14:31.791853 1131084 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:14:31.791860 1131084 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1026 15:14:31.791965 1131084 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-498531 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1026 15:14:31.792048 1131084 ssh_runner.go:195] Run: crio config
	I1026 15:14:31.844095 1131084 cni.go:84] Creating CNI manager for "calico"
	I1026 15:14:31.844133 1131084 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:14:31.844158 1131084 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-498531 NodeName:calico-498531 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:14:31.844322 1131084 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-498531"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 15:14:31.844393 1131084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:14:31.853309 1131084 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:14:31.853377 1131084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:14:31.862147 1131084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1026 15:14:31.877248 1131084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:14:31.893399 1131084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
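The three YAML stanzas printed above are exactly what just landed on the node as /var/tmp/minikube/kubeadm.yaml.new (the 2209-byte scp). A config like this can be exercised without side effects before a real init; this run does not do so, but the check would be:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run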
	I1026 15:14:31.907486 1131084 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:14:31.911580 1131084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:14:31.922598 1131084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:14:32.010841 1131084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:14:32.037983 1131084 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531 for IP: 192.168.94.2
	I1026 15:14:32.038013 1131084 certs.go:195] generating shared ca certs ...
	I1026 15:14:32.038038 1131084 certs.go:227] acquiring lock for ca certs: {Name:mkc310765b5f037cf348f6c57ba521193a825757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:32.038269 1131084 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key
	I1026 15:14:32.038333 1131084 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key
	I1026 15:14:32.038349 1131084 certs.go:257] generating profile certs ...
	I1026 15:14:32.038425 1131084 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/client.key
	I1026 15:14:32.038450 1131084 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/client.crt with IP's: []
	I1026 15:14:32.312156 1131084 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/client.crt ...
	I1026 15:14:32.312197 1131084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/client.crt: {Name:mkb1b43c58262db718e8d170148fae0d52eb48ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:32.312410 1131084 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/client.key ...
	I1026 15:14:32.312425 1131084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/client.key: {Name:mk81742df0d97949e17864edaa296faedfa8e131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:32.312548 1131084 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.key.15cc02cc
	I1026 15:14:32.312572 1131084 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.crt.15cc02cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1026 15:14:32.478187 1131084 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.crt.15cc02cc ...
	I1026 15:14:32.478219 1131084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.crt.15cc02cc: {Name:mkd6c3652a23627fe5a24bc0bd1949a08592c079 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:32.478410 1131084 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.key.15cc02cc ...
	I1026 15:14:32.478458 1131084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.key.15cc02cc: {Name:mk174b8926c3607c3963f01d1b77c59a2bc6d1ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:32.478585 1131084 certs.go:382] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.crt.15cc02cc -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.crt
	I1026 15:14:32.478701 1131084 certs.go:386] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.key.15cc02cc -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.key
	I1026 15:14:32.478800 1131084 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/proxy-client.key
	I1026 15:14:32.478822 1131084 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/proxy-client.crt with IP's: []
	I1026 15:14:32.654880 1131084 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/proxy-client.crt ...
	I1026 15:14:32.654920 1131084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/proxy-client.crt: {Name:mk84b0d54f743f6528098ad79518e74e2839c1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:32.655127 1131084 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/proxy-client.key ...
	I1026 15:14:32.655148 1131084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/proxy-client.key: {Name:mk66d0d55babfe35bce01e440e581ddbb80f8423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
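certs.go drives all of the above in Go (crypto.go does the actual key and certificate generation). For orientation only, the same shape expressed with openssl is a CA-signed client certificate; the filenames here are hypothetical and the subject mirrors the "minikube-user" cert being generated:

    # sketch only; minikube generates these in Go, not via openssl
    openssl req -new -newkey rsa:2048 -nodes -keyout client.key \
      -subj "/CN=minikube-user/O=system:masters" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -days 365 -out client.crt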
	I1026 15:14:32.655395 1131084 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem (1338 bytes)
	W1026 15:14:32.655459 1131084 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095_empty.pem, impossibly tiny 0 bytes
	I1026 15:14:32.655476 1131084 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:14:32.655510 1131084 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:14:32.655546 1131084 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:14:32.655582 1131084 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem (1675 bytes)
	I1026 15:14:32.655643 1131084 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:14:32.656352 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:14:32.675929 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:14:32.694678 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:14:32.712972 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:14:32.731712 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 15:14:32.751416 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 15:14:32.770380 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:14:32.788707 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/calico-498531/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 15:14:32.807966 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /usr/share/ca-certificates/8450952.pem (1708 bytes)
	I1026 15:14:32.828197 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:14:32.847795 1131084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem --> /usr/share/ca-certificates/845095.pem (1338 bytes)
	I1026 15:14:32.868432 1131084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:14:32.882369 1131084 ssh_runner.go:195] Run: openssl version
	I1026 15:14:32.889080 1131084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845095.pem && ln -fs /usr/share/ca-certificates/845095.pem /etc/ssl/certs/845095.pem"
	I1026 15:14:32.899603 1131084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845095.pem
	I1026 15:14:32.905031 1131084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:26 /usr/share/ca-certificates/845095.pem
	I1026 15:14:32.905112 1131084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845095.pem
	I1026 15:14:32.943270 1131084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/845095.pem /etc/ssl/certs/51391683.0"
	I1026 15:14:32.952802 1131084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8450952.pem && ln -fs /usr/share/ca-certificates/8450952.pem /etc/ssl/certs/8450952.pem"
	I1026 15:14:32.962090 1131084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8450952.pem
	I1026 15:14:32.966024 1131084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:26 /usr/share/ca-certificates/8450952.pem
	I1026 15:14:32.966087 1131084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8450952.pem
	I1026 15:14:33.004317 1131084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8450952.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:14:33.014144 1131084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:14:33.022942 1131084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:14:33.026969 1131084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:14 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:14:33.027025 1131084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:14:33.063768 1131084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
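The 51391683.0, 3ec20f2e.0 and b5213941.0 link names above are OpenSSL subject hashes: the value printed by each `openssl x509 -hash -noout` call becomes the symlink name under /etc/ssl/certs, which is how the TLS stack looks up a CA by subject. The idiom in two lines, using the minikubeCA file from this run:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"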
	I1026 15:14:33.074766 1131084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:14:33.079127 1131084 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 15:14:33.079208 1131084 kubeadm.go:400] StartCluster: {Name:calico-498531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:14:33.079307 1131084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:14:33.079389 1131084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:14:33.113757 1131084 cri.go:89] found id: ""
	I1026 15:14:33.113847 1131084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:14:33.123064 1131084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:14:33.131834 1131084 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 15:14:33.131898 1131084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:14:33.140319 1131084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:14:33.140344 1131084 kubeadm.go:157] found existing configuration files:
	
	I1026 15:14:33.140390 1131084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:14:33.148960 1131084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:14:33.149019 1131084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:14:33.157003 1131084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:14:33.165183 1131084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:14:33.165253 1131084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:14:33.173647 1131084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:14:33.181959 1131084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:14:33.182027 1131084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:14:33.189656 1131084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:14:33.197834 1131084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:14:33.197896 1131084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
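The four grep/rm pairs above are the stale-config sweep: any kubeconfig under /etc/kubernetes that does not point at control-plane.minikube.internal:8443 is removed so kubeadm regenerates it (here all four are simply absent, this being a first start). Condensed to an equivalent loop:

    ep=https://control-plane.minikube.internal:8443
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ep" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done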
	I1026 15:14:33.205619 1131084 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 15:14:33.243805 1131084 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 15:14:33.244498 1131084 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 15:14:33.265901 1131084 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 15:14:33.265994 1131084 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 15:14:33.266065 1131084 kubeadm.go:318] OS: Linux
	I1026 15:14:33.266196 1131084 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 15:14:33.266279 1131084 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 15:14:33.266346 1131084 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 15:14:33.266438 1131084 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 15:14:33.266526 1131084 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 15:14:33.266601 1131084 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 15:14:33.266699 1131084 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 15:14:33.266779 1131084 kubeadm.go:318] CGROUPS_IO: enabled
	I1026 15:14:33.327889 1131084 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 15:14:33.328064 1131084 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 15:14:33.328217 1131084 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 15:14:33.336945 1131084 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 15:14:33.339010 1131084 out.go:252]   - Generating certificates and keys ...
	I1026 15:14:33.339092 1131084 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:14:33.339190 1131084 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:14:33.406640 1131084 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:14:33.567594 1131084 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:14:33.725919 1131084 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:14:33.844049 1131084 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:14:33.874815 1131084 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:14:33.875341 1131084 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [calico-498531 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1026 15:14:34.176333 1131084 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:14:34.176578 1131084 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [calico-498531 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1026 15:14:34.240669 1131084 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:14:34.477448 1131084 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:14:34.581009 1131084 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:14:34.581250 1131084 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:14:35.129273 1131084 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:14:35.271657 1131084 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 15:14:35.495983 1131084 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:14:35.623505 1131084 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:14:35.900652 1131084 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:14:35.901283 1131084 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:14:35.905235 1131084 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1026 15:14:31.063275 1122250 node_ready.go:57] node "kindnet-498531" has "Ready":"False" status (will retry)
	I1026 15:14:33.563117 1122250 node_ready.go:49] node "kindnet-498531" is "Ready"
	I1026 15:14:33.563150 1122250 node_ready.go:38] duration metric: took 11.003589447s for node "kindnet-498531" to be "Ready" ...
	I1026 15:14:33.563199 1122250 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:14:33.563266 1122250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:14:33.576858 1122250 api_server.go:72] duration metric: took 11.401224049s to wait for apiserver process to appear ...
	I1026 15:14:33.576891 1122250 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:14:33.576917 1122250 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1026 15:14:33.581421 1122250 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1026 15:14:33.582522 1122250 api_server.go:141] control plane version: v1.34.1
	I1026 15:14:33.582565 1122250 api_server.go:131] duration metric: took 5.66448ms to wait for apiserver health ...
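The healthz probe above is a plain HTTPS GET that expects the literal body "ok"; the apiserver normally serves /healthz to anonymous clients, so a rough manual equivalent (with -k standing in for the CA verification minikube performs) is:

    curl -sk https://192.168.103.2:8443/healthz    # expect: ok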
	I1026 15:14:33.582575 1122250 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:14:33.586724 1122250 system_pods.go:59] 8 kube-system pods found
	I1026 15:14:33.586771 1122250 system_pods.go:61] "coredns-66bc5c9577-95sqq" [93f4d686-0c06-48e6-9ccc-c07225beb1ed] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:33.586785 1122250 system_pods.go:61] "etcd-kindnet-498531" [f334ddf9-c0c2-4af0-88ec-0b5427d4942d] Running
	I1026 15:14:33.586803 1122250 system_pods.go:61] "kindnet-6d577" [b1fd8564-b7cf-40f1-90f7-17a0ea8fd227] Running
	I1026 15:14:33.586809 1122250 system_pods.go:61] "kube-apiserver-kindnet-498531" [8fe7749a-ce9c-483d-aee5-25070cebf447] Running
	I1026 15:14:33.586817 1122250 system_pods.go:61] "kube-controller-manager-kindnet-498531" [628ba3a0-0a73-45fd-beee-d7de8002c3df] Running
	I1026 15:14:33.586823 1122250 system_pods.go:61] "kube-proxy-8jlfc" [c04308f6-8f4d-42e8-b3ac-e31c28be9148] Running
	I1026 15:14:33.586830 1122250 system_pods.go:61] "kube-scheduler-kindnet-498531" [15912c58-5a96-49ae-a262-e54309fc9b02] Running
	I1026 15:14:33.586837 1122250 system_pods.go:61] "storage-provisioner" [f38dbe78-c498-4c89-b60b-0c8a7acc5eea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:14:33.586849 1122250 system_pods.go:74] duration metric: took 4.265706ms to wait for pod list to return data ...
	I1026 15:14:33.586866 1122250 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:14:33.589645 1122250 default_sa.go:45] found service account: "default"
	I1026 15:14:33.589670 1122250 default_sa.go:55] duration metric: took 2.793476ms for default service account to be created ...
	I1026 15:14:33.589682 1122250 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:14:33.593125 1122250 system_pods.go:86] 8 kube-system pods found
	I1026 15:14:33.593221 1122250 system_pods.go:89] "coredns-66bc5c9577-95sqq" [93f4d686-0c06-48e6-9ccc-c07225beb1ed] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:33.593236 1122250 system_pods.go:89] "etcd-kindnet-498531" [f334ddf9-c0c2-4af0-88ec-0b5427d4942d] Running
	I1026 15:14:33.593245 1122250 system_pods.go:89] "kindnet-6d577" [b1fd8564-b7cf-40f1-90f7-17a0ea8fd227] Running
	I1026 15:14:33.593266 1122250 system_pods.go:89] "kube-apiserver-kindnet-498531" [8fe7749a-ce9c-483d-aee5-25070cebf447] Running
	I1026 15:14:33.593276 1122250 system_pods.go:89] "kube-controller-manager-kindnet-498531" [628ba3a0-0a73-45fd-beee-d7de8002c3df] Running
	I1026 15:14:33.593282 1122250 system_pods.go:89] "kube-proxy-8jlfc" [c04308f6-8f4d-42e8-b3ac-e31c28be9148] Running
	I1026 15:14:33.593287 1122250 system_pods.go:89] "kube-scheduler-kindnet-498531" [15912c58-5a96-49ae-a262-e54309fc9b02] Running
	I1026 15:14:33.593298 1122250 system_pods.go:89] "storage-provisioner" [f38dbe78-c498-4c89-b60b-0c8a7acc5eea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:14:33.593330 1122250 retry.go:31] will retry after 280.883566ms: missing components: kube-dns
	I1026 15:14:33.878127 1122250 system_pods.go:86] 8 kube-system pods found
	I1026 15:14:33.878155 1122250 system_pods.go:89] "coredns-66bc5c9577-95sqq" [93f4d686-0c06-48e6-9ccc-c07225beb1ed] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:33.878175 1122250 system_pods.go:89] "etcd-kindnet-498531" [f334ddf9-c0c2-4af0-88ec-0b5427d4942d] Running
	I1026 15:14:33.878199 1122250 system_pods.go:89] "kindnet-6d577" [b1fd8564-b7cf-40f1-90f7-17a0ea8fd227] Running
	I1026 15:14:33.878209 1122250 system_pods.go:89] "kube-apiserver-kindnet-498531" [8fe7749a-ce9c-483d-aee5-25070cebf447] Running
	I1026 15:14:33.878213 1122250 system_pods.go:89] "kube-controller-manager-kindnet-498531" [628ba3a0-0a73-45fd-beee-d7de8002c3df] Running
	I1026 15:14:33.878216 1122250 system_pods.go:89] "kube-proxy-8jlfc" [c04308f6-8f4d-42e8-b3ac-e31c28be9148] Running
	I1026 15:14:33.878220 1122250 system_pods.go:89] "kube-scheduler-kindnet-498531" [15912c58-5a96-49ae-a262-e54309fc9b02] Running
	I1026 15:14:33.878234 1122250 system_pods.go:89] "storage-provisioner" [f38dbe78-c498-4c89-b60b-0c8a7acc5eea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:14:33.878250 1122250 retry.go:31] will retry after 306.998461ms: missing components: kube-dns
	I1026 15:14:34.189580 1122250 system_pods.go:86] 8 kube-system pods found
	I1026 15:14:34.189611 1122250 system_pods.go:89] "coredns-66bc5c9577-95sqq" [93f4d686-0c06-48e6-9ccc-c07225beb1ed] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:34.189616 1122250 system_pods.go:89] "etcd-kindnet-498531" [f334ddf9-c0c2-4af0-88ec-0b5427d4942d] Running
	I1026 15:14:34.189623 1122250 system_pods.go:89] "kindnet-6d577" [b1fd8564-b7cf-40f1-90f7-17a0ea8fd227] Running
	I1026 15:14:34.189626 1122250 system_pods.go:89] "kube-apiserver-kindnet-498531" [8fe7749a-ce9c-483d-aee5-25070cebf447] Running
	I1026 15:14:34.189629 1122250 system_pods.go:89] "kube-controller-manager-kindnet-498531" [628ba3a0-0a73-45fd-beee-d7de8002c3df] Running
	I1026 15:14:34.189633 1122250 system_pods.go:89] "kube-proxy-8jlfc" [c04308f6-8f4d-42e8-b3ac-e31c28be9148] Running
	I1026 15:14:34.189637 1122250 system_pods.go:89] "kube-scheduler-kindnet-498531" [15912c58-5a96-49ae-a262-e54309fc9b02] Running
	I1026 15:14:34.189641 1122250 system_pods.go:89] "storage-provisioner" [f38dbe78-c498-4c89-b60b-0c8a7acc5eea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:14:34.189657 1122250 retry.go:31] will retry after 346.706947ms: missing components: kube-dns
	I1026 15:14:34.541594 1122250 system_pods.go:86] 8 kube-system pods found
	I1026 15:14:34.541629 1122250 system_pods.go:89] "coredns-66bc5c9577-95sqq" [93f4d686-0c06-48e6-9ccc-c07225beb1ed] Running
	I1026 15:14:34.541637 1122250 system_pods.go:89] "etcd-kindnet-498531" [f334ddf9-c0c2-4af0-88ec-0b5427d4942d] Running
	I1026 15:14:34.541643 1122250 system_pods.go:89] "kindnet-6d577" [b1fd8564-b7cf-40f1-90f7-17a0ea8fd227] Running
	I1026 15:14:34.541647 1122250 system_pods.go:89] "kube-apiserver-kindnet-498531" [8fe7749a-ce9c-483d-aee5-25070cebf447] Running
	I1026 15:14:34.541651 1122250 system_pods.go:89] "kube-controller-manager-kindnet-498531" [628ba3a0-0a73-45fd-beee-d7de8002c3df] Running
	I1026 15:14:34.541657 1122250 system_pods.go:89] "kube-proxy-8jlfc" [c04308f6-8f4d-42e8-b3ac-e31c28be9148] Running
	I1026 15:14:34.541662 1122250 system_pods.go:89] "kube-scheduler-kindnet-498531" [15912c58-5a96-49ae-a262-e54309fc9b02] Running
	I1026 15:14:34.541666 1122250 system_pods.go:89] "storage-provisioner" [f38dbe78-c498-4c89-b60b-0c8a7acc5eea] Running
	I1026 15:14:34.541677 1122250 system_pods.go:126] duration metric: took 951.988355ms to wait for k8s-apps to be running ...
	I1026 15:14:34.541690 1122250 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:14:34.541750 1122250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:14:34.555570 1122250 system_svc.go:56] duration metric: took 13.867572ms WaitForService to wait for kubelet
	I1026 15:14:34.555604 1122250 kubeadm.go:586] duration metric: took 12.379977538s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:14:34.555624 1122250 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:14:34.558916 1122250 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 15:14:34.558944 1122250 node_conditions.go:123] node cpu capacity is 8
	I1026 15:14:34.558958 1122250 node_conditions.go:105] duration metric: took 3.329095ms to run NodePressure ...
	I1026 15:14:34.558970 1122250 start.go:241] waiting for startup goroutines ...
	I1026 15:14:34.558977 1122250 start.go:246] waiting for cluster config update ...
	I1026 15:14:34.558987 1122250 start.go:255] writing updated cluster config ...
	I1026 15:14:34.559299 1122250 ssh_runner.go:195] Run: rm -f paused
	I1026 15:14:34.563782 1122250 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:14:34.567670 1122250 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-95sqq" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:34.572469 1122250 pod_ready.go:94] pod "coredns-66bc5c9577-95sqq" is "Ready"
	I1026 15:14:34.572496 1122250 pod_ready.go:86] duration metric: took 4.797495ms for pod "coredns-66bc5c9577-95sqq" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:34.574819 1122250 pod_ready.go:83] waiting for pod "etcd-kindnet-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:34.579391 1122250 pod_ready.go:94] pod "etcd-kindnet-498531" is "Ready"
	I1026 15:14:34.579418 1122250 pod_ready.go:86] duration metric: took 4.57146ms for pod "etcd-kindnet-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:34.581636 1122250 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:34.586229 1122250 pod_ready.go:94] pod "kube-apiserver-kindnet-498531" is "Ready"
	I1026 15:14:34.586254 1122250 pod_ready.go:86] duration metric: took 4.595986ms for pod "kube-apiserver-kindnet-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:34.588554 1122250 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:34.968445 1122250 pod_ready.go:94] pod "kube-controller-manager-kindnet-498531" is "Ready"
	I1026 15:14:34.968480 1122250 pod_ready.go:86] duration metric: took 379.902384ms for pod "kube-controller-manager-kindnet-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:35.168998 1122250 pod_ready.go:83] waiting for pod "kube-proxy-8jlfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:35.568757 1122250 pod_ready.go:94] pod "kube-proxy-8jlfc" is "Ready"
	I1026 15:14:35.568788 1122250 pod_ready.go:86] duration metric: took 399.752935ms for pod "kube-proxy-8jlfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:35.768804 1122250 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:36.169001 1122250 pod_ready.go:94] pod "kube-scheduler-kindnet-498531" is "Ready"
	I1026 15:14:36.169041 1122250 pod_ready.go:86] duration metric: took 400.205487ms for pod "kube-scheduler-kindnet-498531" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:36.169057 1122250 pod_ready.go:40] duration metric: took 1.605242517s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:14:36.217396 1122250 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:14:36.220298 1122250 out.go:179] * Done! kubectl is now configured to use "kindnet-498531" cluster and "default" namespace by default
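The pod_ready waits above poll each control-plane label in turn; roughly the same check can be reproduced outside the test harness with kubectl wait, e.g. for the CoreDNS pods of this cluster:

    kubectl --context kindnet-498531 -n kube-system \
      wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m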
	W1026 15:14:32.084499 1123102 pod_ready.go:104] pod "coredns-66bc5c9577-shw6l" is not "Ready", error: <nil>
	W1026 15:14:34.584900 1123102 pod_ready.go:104] pod "coredns-66bc5c9577-shw6l" is not "Ready", error: <nil>
	I1026 15:14:35.906556 1131084 out.go:252]   - Booting up control plane ...
	I1026 15:14:35.906673 1131084 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:14:35.906764 1131084 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:14:35.907439 1131084 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:14:35.921886 1131084 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:14:35.922023 1131084 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:14:35.930452 1131084 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:14:35.930988 1131084 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:14:35.931055 1131084 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:14:36.036435 1131084 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:14:36.036605 1131084 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 15:14:37.037289 1131084 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000949658s
	I1026 15:14:37.041135 1131084 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 15:14:37.041284 1131084 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1026 15:14:37.041405 1131084 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 15:14:37.041487 1131084 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 15:14:38.319880 1131084 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.278658704s
	I1026 15:14:39.270188 1131084 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.229050699s
	I1026 15:14:41.043344 1131084 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.002206371s
	I1026 15:14:41.056408 1131084 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 15:14:41.068238 1131084 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 15:14:41.079307 1131084 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 15:14:41.079573 1131084 kubeadm.go:318] [mark-control-plane] Marking the node calico-498531 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 15:14:41.089307 1131084 kubeadm.go:318] [bootstrap-token] Using token: pczui7.w2njk3cqoj9o8jb1
	
	
	==> CRI-O <==
	Oct 26 15:14:12 embed-certs-535130 crio[570]: time="2025-10-26T15:14:12.371883338Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/bb3977ec506badb0ae29a644f519871fae1c030f4d2dddd78425bcbddff1f3be/merged/etc/passwd: no such file or directory"
	Oct 26 15:14:12 embed-certs-535130 crio[570]: time="2025-10-26T15:14:12.371929173Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/bb3977ec506badb0ae29a644f519871fae1c030f4d2dddd78425bcbddff1f3be/merged/etc/group: no such file or directory"
	Oct 26 15:14:12 embed-certs-535130 crio[570]: time="2025-10-26T15:14:12.372272495Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:12 embed-certs-535130 crio[570]: time="2025-10-26T15:14:12.414251681Z" level=info msg="Created container 239e148b0a6d4ade1fbee745dd81f15d67ba591399800ea09cf65541f7517cf7: kube-system/storage-provisioner/storage-provisioner" id=a68eb85a-3276-4e30-a8dd-9b5ffd46da1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:14:12 embed-certs-535130 crio[570]: time="2025-10-26T15:14:12.415679089Z" level=info msg="Starting container: 239e148b0a6d4ade1fbee745dd81f15d67ba591399800ea09cf65541f7517cf7" id=9b813a82-e0a3-4e35-b538-e12831b46ebe name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:14:12 embed-certs-535130 crio[570]: time="2025-10-26T15:14:12.419926346Z" level=info msg="Started container" PID=1712 containerID=239e148b0a6d4ade1fbee745dd81f15d67ba591399800ea09cf65541f7517cf7 description=kube-system/storage-provisioner/storage-provisioner id=9b813a82-e0a3-4e35-b538-e12831b46ebe name=/runtime.v1.RuntimeService/StartContainer sandboxID=8d181ec3b3c584db592a440aaecf49bdf46c00f0787eeed83260a5822d7e015e
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.175596644Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.183074584Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.183110921Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.183128704Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.194653425Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.194842346Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.195100225Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.203747241Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.204291277Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.205252123Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.212590893Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 15:14:22 embed-certs-535130 crio[570]: time="2025-10-26T15:14:22.212625466Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 15:14:37 embed-certs-535130 crio[570]: time="2025-10-26T15:14:37.203130385Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b044f21a-bb6b-456c-8b96-e37816b86b3b name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:14:37 embed-certs-535130 crio[570]: time="2025-10-26T15:14:37.204300287Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8fd08f86-4dd9-478e-83bc-b520c479292b name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:14:37 embed-certs-535130 crio[570]: time="2025-10-26T15:14:37.205543752Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ld9k9/dashboard-metrics-scraper" id=1d8946c6-cdab-4b07-9f86-67d901e9ee7f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:14:37 embed-certs-535130 crio[570]: time="2025-10-26T15:14:37.205720317Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:37 embed-certs-535130 crio[570]: time="2025-10-26T15:14:37.212191342Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:37 embed-certs-535130 crio[570]: time="2025-10-26T15:14:37.212883554Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:37 embed-certs-535130 crio[570]: time="2025-10-26T15:14:37.304367133Z" level=info msg="CreateCtr: context was either canceled or the deadline was exceeded: context canceled" id=1d8946c6-cdab-4b07-9f86-67d901e9ee7f name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	239e148b0a6d4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           29 seconds ago       Running             storage-provisioner         1                   8d181ec3b3c58       storage-provisioner                          kube-system
	3102e249f41ed       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           34 seconds ago       Exited              dashboard-metrics-scraper   2                   360206fcbb1bf       dashboard-metrics-scraper-6ffb444bf9-ld9k9   kubernetes-dashboard
	c2406044b7f31       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   50 seconds ago       Running             kubernetes-dashboard        0                   69575968b6bcf       kubernetes-dashboard-855c9754f9-8p6g2        kubernetes-dashboard
	043a8fb5117eb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           About a minute ago   Running             busybox                     1                   ee5ea96ac09da       busybox                                      default
	0e893e41892fa       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           About a minute ago   Running             coredns                     0                   6624abc8e9db7       coredns-66bc5c9577-pnbct                     kube-system
	4cd2c8e35ef08       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           About a minute ago   Running             kube-proxy                  0                   e832114b001be       kube-proxy-nbr2d                             kube-system
	5e1d1087d88f6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           About a minute ago   Running             kindnet-cni                 0                   b4fe48d1eaf92       kindnet-mlqjm                                kube-system
	fd864c01850c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           About a minute ago   Exited              storage-provisioner         0                   8d181ec3b3c58       storage-provisioner                          kube-system
	79f294b1af537       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   f30050e954087       kube-apiserver-embed-certs-535130            kube-system
	0cf664b8ea8fd       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   13eb1fc659970       kube-controller-manager-embed-certs-535130   kube-system
	43565d9e19139       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   91b9612dd56a7       kube-scheduler-embed-certs-535130            kube-system
	7f30d07b339ab       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   6cbb7d507b7ef       etcd-embed-certs-535130                      kube-system
	
	
	==> coredns [0e893e41892fa12c7ec68b76a502b7a243a84d94912ec68bf8757235766702b0] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39199 - 17047 "HINFO IN 1070022183893654826.2872744642068059383. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.077651463s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-535130
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-535130
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=embed-certs-535130
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_12_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:12:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-535130
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:14:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:14:21 +0000   Sun, 26 Oct 2025 15:12:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:14:21 +0000   Sun, 26 Oct 2025 15:12:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:14:21 +0000   Sun, 26 Oct 2025 15:12:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:14:21 +0000   Sun, 26 Oct 2025 15:12:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-535130
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                d2eb1dd1-3767-46c2-b62f-7198c6aeeadd
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-pnbct                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-embed-certs-535130                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-mlqjm                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-embed-certs-535130             250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-embed-certs-535130    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-nbr2d                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-embed-certs-535130             100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ld9k9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8p6g2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 112s               kube-proxy       
	  Normal  Starting                 59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  118s               kubelet          Node embed-certs-535130 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s               kubelet          Node embed-certs-535130 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s               kubelet          Node embed-certs-535130 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           114s               node-controller  Node embed-certs-535130 event: Registered Node embed-certs-535130 in Controller
	  Normal  NodeReady                102s               kubelet          Node embed-certs-535130 status is now: NodeReady
	  Normal  Starting                 63s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)  kubelet          Node embed-certs-535130 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)  kubelet          Node embed-certs-535130 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x8 over 63s)  kubelet          Node embed-certs-535130 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           58s                node-controller  Node embed-certs-535130 event: Registered Node embed-certs-535130 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	
	
	==> etcd [7f30d07b339ab7331f72cd45f5f34ee9c7eb82bec1197a77db9c34d2fcb6c24b] <==
	{"level":"warn","ts":"2025-10-26T15:13:39.756233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.763132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.770089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.780861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.788619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.796046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.804112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.812807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.821587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.830668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.838417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.855577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.863025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.871140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.879509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.887455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.895062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.913155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.921460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.930321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:13:39.982724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44424","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T15:14:26.329610Z","caller":"traceutil/trace.go:172","msg":"trace[1876156884] transaction","detail":"{read_only:false; response_revision:634; number_of_response:1; }","duration":"123.588948ms","start":"2025-10-26T15:14:26.206002Z","end":"2025-10-26T15:14:26.329591Z","steps":["trace[1876156884] 'process raft request'  (duration: 123.461118ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:14:27.504731Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.434489ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-10-26T15:14:27.504889Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.530082ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356222968320785 > lease_revoke:<id:59069a21150bd27e>","response":"size:28"}
	{"level":"info","ts":"2025-10-26T15:14:27.504935Z","caller":"traceutil/trace.go:172","msg":"trace[1520001001] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:634; }","duration":"128.666954ms","start":"2025-10-26T15:14:27.376249Z","end":"2025-10-26T15:14:27.504916Z","steps":["trace[1520001001] 'range keys from in-memory index tree'  (duration: 128.414716ms)"],"step_count":1}
	
	
	==> kernel <==
	 15:14:41 up  2:57,  0 user,  load average: 4.70, 3.23, 2.06
	Linux embed-certs-535130 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5e1d1087d88f63dfa08475c5c3d49f7e0a5ce8b0ccdf279101ffe4c56c135534] <==
	I1026 15:13:41.876837       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:13:41.969311       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 15:13:41.969732       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:13:41.969757       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:13:41.969785       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:13:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:13:42.175371       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:13:42.175398       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:13:42.175409       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:13:42.176441       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1026 15:14:12.176512       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1026 15:14:12.176632       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 15:14:12.176678       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1026 15:14:12.176686       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1026 15:14:13.775585       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:14:13.775637       1 metrics.go:72] Registering metrics
	I1026 15:14:13.775730       1 controller.go:711] "Syncing nftables rules"
	I1026 15:14:22.175052       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:14:22.175119       1 main.go:301] handling current node
	I1026 15:14:32.182876       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 15:14:32.182914       1 main.go:301] handling current node
	
	
	==> kube-apiserver [79f294b1af5377dbbe09bff36c0ce752c337fff26f468f52ba372eeae7c2fbd7] <==
	I1026 15:13:40.534093       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 15:13:40.534297       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 15:13:40.534399       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:13:40.534607       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1026 15:13:40.534652       1 aggregator.go:171] initial CRD sync complete...
	I1026 15:13:40.534668       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 15:13:40.534675       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:13:40.534676       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 15:13:40.534682       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:13:40.534697       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1026 15:13:40.543683       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1026 15:13:40.551714       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 15:13:40.568594       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:13:40.578709       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:13:40.837457       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 15:13:40.866505       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:13:40.886462       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:13:40.896365       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:13:40.903139       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:13:40.938873       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.85.9"}
	I1026 15:13:40.949675       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.161.127"}
	I1026 15:13:41.442846       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:13:44.289523       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:13:44.390682       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:13:44.439367       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0cf664b8ea8fd4397a4e4d0903d086cb617b472ad1631050bc542a9e5c06ca09] <==
	I1026 15:13:43.885593       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 15:13:43.885744       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 15:13:43.885812       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:13:43.885825       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:13:43.885835       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 15:13:43.885958       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 15:13:43.886099       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 15:13:43.886115       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 15:13:43.886303       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 15:13:43.886317       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1026 15:13:43.886331       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 15:13:43.886509       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 15:13:43.888238       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 15:13:43.890409       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 15:13:43.890533       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 15:13:43.890608       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-535130"
	I1026 15:13:43.890679       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 15:13:43.893190       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:13:43.893255       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 15:13:43.895425       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 15:13:43.895453       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:13:43.897553       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 15:13:43.902200       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 15:13:43.904536       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 15:13:43.907713       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [4cd2c8e35ef08093cad19d86eb698b67b7f3efc33cc6e0f1b1f9e57148715d1d] <==
	I1026 15:13:41.785324       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:13:41.873276       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:13:41.975128       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:13:41.975204       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1026 15:13:41.975285       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:13:42.000826       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:13:42.000915       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:13:42.008018       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:13:42.008661       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:13:42.008957       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:13:42.013308       1 config.go:200] "Starting service config controller"
	I1026 15:13:42.014755       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:13:42.014505       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:13:42.014889       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:13:42.014518       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:13:42.014942       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:13:42.014484       1 config.go:309] "Starting node config controller"
	I1026 15:13:42.014954       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:13:42.014960       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:13:42.115113       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:13:42.115202       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:13:42.115260       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [43565d9e1913984f12b45a1203fca769c7b760ccf18830408972ff108c39b9bf] <==
	I1026 15:13:39.739608       1 serving.go:386] Generated self-signed cert in-memory
	W1026 15:13:40.449193       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 15:13:40.449228       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 15:13:40.449240       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 15:13:40.449250       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 15:13:40.500577       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 15:13:40.500617       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:13:40.505923       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:13:40.506128       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:13:40.509611       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:13:40.509983       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 15:13:40.608859       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:13:44 embed-certs-535130 kubelet[732]: I1026 15:13:44.602901     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2gvz\" (UniqueName: \"kubernetes.io/projected/70961b5a-99b2-49fb-ab32-0ea0c0780577-kube-api-access-d2gvz\") pod \"dashboard-metrics-scraper-6ffb444bf9-ld9k9\" (UID: \"70961b5a-99b2-49fb-ab32-0ea0c0780577\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ld9k9"
	Oct 26 15:13:44 embed-certs-535130 kubelet[732]: I1026 15:13:44.602945     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbv4q\" (UniqueName: \"kubernetes.io/projected/0a6afa02-36ae-4637-8893-3f91d7a0fa0e-kube-api-access-fbv4q\") pod \"kubernetes-dashboard-855c9754f9-8p6g2\" (UID: \"0a6afa02-36ae-4637-8893-3f91d7a0fa0e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8p6g2"
	Oct 26 15:13:47 embed-certs-535130 kubelet[732]: I1026 15:13:47.265961     732 scope.go:117] "RemoveContainer" containerID="d720186b30651c6b7eb83d4718ca99b3b5e2e3982338e87a3f38eec6fb1541b5"
	Oct 26 15:13:48 embed-certs-535130 kubelet[732]: I1026 15:13:48.271401     732 scope.go:117] "RemoveContainer" containerID="d720186b30651c6b7eb83d4718ca99b3b5e2e3982338e87a3f38eec6fb1541b5"
	Oct 26 15:13:48 embed-certs-535130 kubelet[732]: I1026 15:13:48.271577     732 scope.go:117] "RemoveContainer" containerID="d9cb2be8dd51f75845758f5bfc36df5249c9c168e67bf68e57673d40b22797d5"
	Oct 26 15:13:48 embed-certs-535130 kubelet[732]: E1026 15:13:48.271780     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ld9k9_kubernetes-dashboard(70961b5a-99b2-49fb-ab32-0ea0c0780577)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ld9k9" podUID="70961b5a-99b2-49fb-ab32-0ea0c0780577"
	Oct 26 15:13:49 embed-certs-535130 kubelet[732]: I1026 15:13:49.276118     732 scope.go:117] "RemoveContainer" containerID="d9cb2be8dd51f75845758f5bfc36df5249c9c168e67bf68e57673d40b22797d5"
	Oct 26 15:13:49 embed-certs-535130 kubelet[732]: E1026 15:13:49.276357     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ld9k9_kubernetes-dashboard(70961b5a-99b2-49fb-ab32-0ea0c0780577)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ld9k9" podUID="70961b5a-99b2-49fb-ab32-0ea0c0780577"
	Oct 26 15:13:51 embed-certs-535130 kubelet[732]: I1026 15:13:51.300329     732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8p6g2" podStartSLOduration=1.193335998 podStartE2EDuration="7.30029739s" podCreationTimestamp="2025-10-26 15:13:44 +0000 UTC" firstStartedPulling="2025-10-26 15:13:44.829624028 +0000 UTC m=+6.732124344" lastFinishedPulling="2025-10-26 15:13:50.936585485 +0000 UTC m=+12.839085736" observedRunningTime="2025-10-26 15:13:51.298409894 +0000 UTC m=+13.200910154" watchObservedRunningTime="2025-10-26 15:13:51.30029739 +0000 UTC m=+13.202797649"
	Oct 26 15:13:52 embed-certs-535130 kubelet[732]: I1026 15:13:52.725673     732 scope.go:117] "RemoveContainer" containerID="d9cb2be8dd51f75845758f5bfc36df5249c9c168e67bf68e57673d40b22797d5"
	Oct 26 15:13:52 embed-certs-535130 kubelet[732]: E1026 15:13:52.725910     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ld9k9_kubernetes-dashboard(70961b5a-99b2-49fb-ab32-0ea0c0780577)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ld9k9" podUID="70961b5a-99b2-49fb-ab32-0ea0c0780577"
	Oct 26 15:14:07 embed-certs-535130 kubelet[732]: I1026 15:14:07.202315     732 scope.go:117] "RemoveContainer" containerID="d9cb2be8dd51f75845758f5bfc36df5249c9c168e67bf68e57673d40b22797d5"
	Oct 26 15:14:07 embed-certs-535130 kubelet[732]: I1026 15:14:07.328504     732 scope.go:117] "RemoveContainer" containerID="d9cb2be8dd51f75845758f5bfc36df5249c9c168e67bf68e57673d40b22797d5"
	Oct 26 15:14:07 embed-certs-535130 kubelet[732]: I1026 15:14:07.328751     732 scope.go:117] "RemoveContainer" containerID="3102e249f41ed7e55df37fbb93359807120c0dd5cf37e7ee6fdf6e1c85f14410"
	Oct 26 15:14:07 embed-certs-535130 kubelet[732]: E1026 15:14:07.329364     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ld9k9_kubernetes-dashboard(70961b5a-99b2-49fb-ab32-0ea0c0780577)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ld9k9" podUID="70961b5a-99b2-49fb-ab32-0ea0c0780577"
	Oct 26 15:14:12 embed-certs-535130 kubelet[732]: I1026 15:14:12.357023     732 scope.go:117] "RemoveContainer" containerID="fd864c01850c3a39fcff70d2a1c10ffa508c1d4673cb99b9ac1d5cb6d772026e"
	Oct 26 15:14:12 embed-certs-535130 kubelet[732]: I1026 15:14:12.725916     732 scope.go:117] "RemoveContainer" containerID="3102e249f41ed7e55df37fbb93359807120c0dd5cf37e7ee6fdf6e1c85f14410"
	Oct 26 15:14:12 embed-certs-535130 kubelet[732]: E1026 15:14:12.726135     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ld9k9_kubernetes-dashboard(70961b5a-99b2-49fb-ab32-0ea0c0780577)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ld9k9" podUID="70961b5a-99b2-49fb-ab32-0ea0c0780577"
	Oct 26 15:14:26 embed-certs-535130 kubelet[732]: I1026 15:14:26.202153     732 scope.go:117] "RemoveContainer" containerID="3102e249f41ed7e55df37fbb93359807120c0dd5cf37e7ee6fdf6e1c85f14410"
	Oct 26 15:14:26 embed-certs-535130 kubelet[732]: E1026 15:14:26.202447     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ld9k9_kubernetes-dashboard(70961b5a-99b2-49fb-ab32-0ea0c0780577)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ld9k9" podUID="70961b5a-99b2-49fb-ab32-0ea0c0780577"
	Oct 26 15:14:37 embed-certs-535130 kubelet[732]: I1026 15:14:37.202493     732 scope.go:117] "RemoveContainer" containerID="3102e249f41ed7e55df37fbb93359807120c0dd5cf37e7ee6fdf6e1c85f14410"
	Oct 26 15:14:37 embed-certs-535130 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:14:37 embed-certs-535130 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:14:37 embed-certs-535130 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 26 15:14:37 embed-certs-535130 systemd[1]: kubelet.service: Consumed 1.891s CPU time.
	
	
	==> kubernetes-dashboard [c2406044b7f315c5b1ee3f4019f3a406d40d7ef84f78714460b6156504465324] <==
	2025/10/26 15:13:51 Starting overwatch
	2025/10/26 15:13:51 Using namespace: kubernetes-dashboard
	2025/10/26 15:13:51 Using in-cluster config to connect to apiserver
	2025/10/26 15:13:51 Using secret token for csrf signing
	2025/10/26 15:13:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 15:13:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 15:13:51 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 15:13:51 Generating JWE encryption key
	2025/10/26 15:13:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 15:13:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 15:13:51 Initializing JWE encryption key from synchronized object
	2025/10/26 15:13:51 Creating in-cluster Sidecar client
	2025/10/26 15:13:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:13:51 Serving insecurely on HTTP port: 9090
	2025/10/26 15:14:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [239e148b0a6d4ade1fbee745dd81f15d67ba591399800ea09cf65541f7517cf7] <==
	W1026 15:14:12.455152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:15.909878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:20.173152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:23.771594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:26.825514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:29.847852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:29.852425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:14:29.852591       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:14:29.852736       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-535130_e6c1962c-4ba7-47de-82fa-3cd0dbdc059b!
	I1026 15:14:29.852711       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5f6cf1c0-4446-438f-959a-2cf0430f7cb8", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-535130_e6c1962c-4ba7-47de-82fa-3cd0dbdc059b became leader
	W1026 15:14:29.854933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:29.858810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:14:29.953839       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-535130_e6c1962c-4ba7-47de-82fa-3cd0dbdc059b!
	W1026 15:14:31.862618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:31.866942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:33.870919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:33.879657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:35.882735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:35.886590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:37.889613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:37.894971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:39.898056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:39.902098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:41.905343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:41.909414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fd864c01850c3a39fcff70d2a1c10ffa508c1d4673cb99b9ac1d5cb6d772026e] <==
	I1026 15:13:41.681436       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 15:14:11.685702       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-535130 -n embed-certs-535130
I1026 15:14:42.580509  845095 config.go:182] Loaded profile config "kindnet-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-535130 -n embed-certs-535130: exit status 2 (398.875475ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-535130 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.02s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.83s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-790012 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-790012 --alsologtostderr -v=1: exit status 80 (2.586773218s)

-- stdout --
	* Pausing node default-k8s-diff-port-790012 ... 
	
	

-- /stdout --
** stderr ** 
	I1026 15:15:03.639777 1141494 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:15:03.640067 1141494 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:15:03.640078 1141494 out.go:374] Setting ErrFile to fd 2...
	I1026 15:15:03.640082 1141494 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:15:03.640330 1141494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:15:03.640589 1141494 out.go:368] Setting JSON to false
	I1026 15:15:03.640637 1141494 mustload.go:65] Loading cluster: default-k8s-diff-port-790012
	I1026 15:15:03.640973 1141494 config.go:182] Loaded profile config "default-k8s-diff-port-790012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:15:03.641388 1141494 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-790012 --format={{.State.Status}}
	I1026 15:15:03.660498 1141494 host.go:66] Checking if "default-k8s-diff-port-790012" exists ...
	I1026 15:15:03.660828 1141494 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:15:03.729386 1141494 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-26 15:15:03.715534297 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:15:03.730004 1141494 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-790012 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 15:15:03.735757 1141494 out.go:179] * Pausing node default-k8s-diff-port-790012 ... 
	I1026 15:15:03.737441 1141494 host.go:66] Checking if "default-k8s-diff-port-790012" exists ...
	I1026 15:15:03.737787 1141494 ssh_runner.go:195] Run: systemctl --version
	I1026 15:15:03.737849 1141494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-790012
	I1026 15:15:03.759401 1141494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33877 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/default-k8s-diff-port-790012/id_rsa Username:docker}
	I1026 15:15:03.868100 1141494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:15:03.894183 1141494 pause.go:52] kubelet running: true
	I1026 15:15:03.894253 1141494 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:15:04.084391 1141494 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:15:04.084493 1141494 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:15:04.163244 1141494 cri.go:89] found id: "a7481686d0bee57dc98cb02e1b36088f9ec209bfc7d121c41d4c799bc73c0ba1"
	I1026 15:15:04.163276 1141494 cri.go:89] found id: "cdcd33a110ab72d97c137eac4a12dab06a6293ca167a79ea2a1ec28b0b18ccdc"
	I1026 15:15:04.163280 1141494 cri.go:89] found id: "340c4006e10f18fc87ad00cf77d818fadf1aab8a4c9b92d33498730d7f4e711d"
	I1026 15:15:04.163283 1141494 cri.go:89] found id: "86dd13cec7ebd6e740152fe44eb9f68d18517a514d6d9e9b154243c9372b9e3e"
	I1026 15:15:04.163285 1141494 cri.go:89] found id: "cffe05dde621ab9582c7dd3cc9f6894fcec1d0b54f1ed7baf19f6154e397b609"
	I1026 15:15:04.163288 1141494 cri.go:89] found id: "a2d02679a51ed33ad3086b27a58279d82b4d1c6bd035050764df771a3b17cf2c"
	I1026 15:15:04.163291 1141494 cri.go:89] found id: "facf1cc394076aaa508c872a3c8c00a3efde72f036be55b7af624017d37ce6a3"
	I1026 15:15:04.163293 1141494 cri.go:89] found id: "35d0d03944a78ecf21c8c3291224fdd9f405cd21a6e29cd4d3096bc1744575bb"
	I1026 15:15:04.163295 1141494 cri.go:89] found id: "8aa809c39193fbb83582e34b6983bd3f1e5fe7760c1faafff728462dd1913646"
	I1026 15:15:04.163309 1141494 cri.go:89] found id: "5b66042dae93c2be0b0c8e834cd5991a9a551117f3afe526e424fb409f564737"
	I1026 15:15:04.163313 1141494 cri.go:89] found id: "f7bce916e5757f41f13bbf128728404ae709bb2ac55795cf3f137d9120b46fdf"
	I1026 15:15:04.163317 1141494 cri.go:89] found id: ""
	I1026 15:15:04.163377 1141494 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:15:04.179231 1141494 retry.go:31] will retry after 361.870525ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:15:04Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:15:04.541864 1141494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:15:04.558315 1141494 pause.go:52] kubelet running: false
	I1026 15:15:04.558407 1141494 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:15:04.739100 1141494 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:15:04.739225 1141494 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:15:04.818554 1141494 cri.go:89] found id: "a7481686d0bee57dc98cb02e1b36088f9ec209bfc7d121c41d4c799bc73c0ba1"
	I1026 15:15:04.818579 1141494 cri.go:89] found id: "cdcd33a110ab72d97c137eac4a12dab06a6293ca167a79ea2a1ec28b0b18ccdc"
	I1026 15:15:04.818586 1141494 cri.go:89] found id: "340c4006e10f18fc87ad00cf77d818fadf1aab8a4c9b92d33498730d7f4e711d"
	I1026 15:15:04.818591 1141494 cri.go:89] found id: "86dd13cec7ebd6e740152fe44eb9f68d18517a514d6d9e9b154243c9372b9e3e"
	I1026 15:15:04.818595 1141494 cri.go:89] found id: "cffe05dde621ab9582c7dd3cc9f6894fcec1d0b54f1ed7baf19f6154e397b609"
	I1026 15:15:04.818600 1141494 cri.go:89] found id: "a2d02679a51ed33ad3086b27a58279d82b4d1c6bd035050764df771a3b17cf2c"
	I1026 15:15:04.818605 1141494 cri.go:89] found id: "facf1cc394076aaa508c872a3c8c00a3efde72f036be55b7af624017d37ce6a3"
	I1026 15:15:04.818610 1141494 cri.go:89] found id: "35d0d03944a78ecf21c8c3291224fdd9f405cd21a6e29cd4d3096bc1744575bb"
	I1026 15:15:04.818614 1141494 cri.go:89] found id: "8aa809c39193fbb83582e34b6983bd3f1e5fe7760c1faafff728462dd1913646"
	I1026 15:15:04.818621 1141494 cri.go:89] found id: "5b66042dae93c2be0b0c8e834cd5991a9a551117f3afe526e424fb409f564737"
	I1026 15:15:04.818626 1141494 cri.go:89] found id: "f7bce916e5757f41f13bbf128728404ae709bb2ac55795cf3f137d9120b46fdf"
	I1026 15:15:04.818640 1141494 cri.go:89] found id: ""
	I1026 15:15:04.818689 1141494 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:15:04.833210 1141494 retry.go:31] will retry after 326.096842ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:15:04Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:15:05.159713 1141494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:15:05.175225 1141494 pause.go:52] kubelet running: false
	I1026 15:15:05.175300 1141494 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:15:05.396128 1141494 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:15:05.396263 1141494 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:15:05.472248 1141494 cri.go:89] found id: "a7481686d0bee57dc98cb02e1b36088f9ec209bfc7d121c41d4c799bc73c0ba1"
	I1026 15:15:05.472270 1141494 cri.go:89] found id: "cdcd33a110ab72d97c137eac4a12dab06a6293ca167a79ea2a1ec28b0b18ccdc"
	I1026 15:15:05.472275 1141494 cri.go:89] found id: "340c4006e10f18fc87ad00cf77d818fadf1aab8a4c9b92d33498730d7f4e711d"
	I1026 15:15:05.472279 1141494 cri.go:89] found id: "86dd13cec7ebd6e740152fe44eb9f68d18517a514d6d9e9b154243c9372b9e3e"
	I1026 15:15:05.472283 1141494 cri.go:89] found id: "cffe05dde621ab9582c7dd3cc9f6894fcec1d0b54f1ed7baf19f6154e397b609"
	I1026 15:15:05.472288 1141494 cri.go:89] found id: "a2d02679a51ed33ad3086b27a58279d82b4d1c6bd035050764df771a3b17cf2c"
	I1026 15:15:05.472292 1141494 cri.go:89] found id: "facf1cc394076aaa508c872a3c8c00a3efde72f036be55b7af624017d37ce6a3"
	I1026 15:15:05.472296 1141494 cri.go:89] found id: "35d0d03944a78ecf21c8c3291224fdd9f405cd21a6e29cd4d3096bc1744575bb"
	I1026 15:15:05.472299 1141494 cri.go:89] found id: "8aa809c39193fbb83582e34b6983bd3f1e5fe7760c1faafff728462dd1913646"
	I1026 15:15:05.472307 1141494 cri.go:89] found id: "5b66042dae93c2be0b0c8e834cd5991a9a551117f3afe526e424fb409f564737"
	I1026 15:15:05.472311 1141494 cri.go:89] found id: "f7bce916e5757f41f13bbf128728404ae709bb2ac55795cf3f137d9120b46fdf"
	I1026 15:15:05.472316 1141494 cri.go:89] found id: ""
	I1026 15:15:05.472364 1141494 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:15:05.485306 1141494 retry.go:31] will retry after 357.927149ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:15:05Z" level=error msg="open /run/runc: no such file or directory"
	I1026 15:15:05.843907 1141494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:15:05.861883 1141494 pause.go:52] kubelet running: false
	I1026 15:15:05.861959 1141494 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 15:15:06.049615 1141494 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 15:15:06.049724 1141494 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 15:15:06.124053 1141494 cri.go:89] found id: "a7481686d0bee57dc98cb02e1b36088f9ec209bfc7d121c41d4c799bc73c0ba1"
	I1026 15:15:06.124079 1141494 cri.go:89] found id: "cdcd33a110ab72d97c137eac4a12dab06a6293ca167a79ea2a1ec28b0b18ccdc"
	I1026 15:15:06.124085 1141494 cri.go:89] found id: "340c4006e10f18fc87ad00cf77d818fadf1aab8a4c9b92d33498730d7f4e711d"
	I1026 15:15:06.124089 1141494 cri.go:89] found id: "86dd13cec7ebd6e740152fe44eb9f68d18517a514d6d9e9b154243c9372b9e3e"
	I1026 15:15:06.124093 1141494 cri.go:89] found id: "cffe05dde621ab9582c7dd3cc9f6894fcec1d0b54f1ed7baf19f6154e397b609"
	I1026 15:15:06.124098 1141494 cri.go:89] found id: "a2d02679a51ed33ad3086b27a58279d82b4d1c6bd035050764df771a3b17cf2c"
	I1026 15:15:06.124102 1141494 cri.go:89] found id: "facf1cc394076aaa508c872a3c8c00a3efde72f036be55b7af624017d37ce6a3"
	I1026 15:15:06.124105 1141494 cri.go:89] found id: "35d0d03944a78ecf21c8c3291224fdd9f405cd21a6e29cd4d3096bc1744575bb"
	I1026 15:15:06.124109 1141494 cri.go:89] found id: "8aa809c39193fbb83582e34b6983bd3f1e5fe7760c1faafff728462dd1913646"
	I1026 15:15:06.124128 1141494 cri.go:89] found id: "5b66042dae93c2be0b0c8e834cd5991a9a551117f3afe526e424fb409f564737"
	I1026 15:15:06.124132 1141494 cri.go:89] found id: "f7bce916e5757f41f13bbf128728404ae709bb2ac55795cf3f137d9120b46fdf"
	I1026 15:15:06.124135 1141494 cri.go:89] found id: ""
	I1026 15:15:06.124220 1141494 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 15:15:06.139747 1141494 out.go:203] 
	W1026 15:15:06.141090 1141494 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T15:15:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 15:15:06.141108 1141494 out.go:285] * 
	W1026 15:15:06.147110 1141494 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 15:15:06.149722 1141494 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-790012 --alsologtostderr -v=1 failed: exit status 80
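Editor's note: the failing step is visible in the stderr above. Before pausing, minikube disables the kubelet, enumerates the CRI containers in the kube-system, kubernetes-dashboard and istio-operator namespaces via crictl, then shells out to `sudo runc list -f json`. On this crio node the runc state directory /run/runc does not exist, so every attempt exits with status 1, the retry loop (retry.go) exhausts its backoffs, and the failure surfaces as GUEST_PAUSE. The Go sketch below reproduces that list-and-retry shape; it is a minimal approximation, not minikube's actual code, and the attempt count and backoff schedule are illustrative assumptions.

	// Minimal sketch of the list-and-retry shape seen in the log above.
	// Assumptions: the attempt count and backoff are illustrative;
	// minikube's retry.go uses its own jittered schedule.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func listRunningContainers() ([]byte, error) {
		// On this node this fails with
		// "open /run/runc: no such file or directory"
		// because the runc state directory is absent.
		return exec.Command("sudo", "runc", "list", "-f", "json").Output()
	}

	func main() {
		backoff := 300 * time.Millisecond
		for attempt := 1; attempt <= 4; attempt++ {
			out, err := listRunningContainers()
			if err == nil {
				fmt.Printf("containers: %s\n", out)
				return
			}
			fmt.Printf("attempt %d failed: %v; will retry after %v\n", attempt, err, backoff)
			time.Sleep(backoff)
			backoff += 50 * time.Millisecond
		}
		// Once the retries are exhausted, minikube reports GUEST_PAUSE.
		fmt.Println("giving up: list running containers failed")
	}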
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-790012
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-790012:

-- stdout --
	[
	    {
	        "Id": "f2c26d088cf784b9fa3246255055619f610c4cc9d4a3450f83c3d6e8e7c2648a",
	        "Created": "2025-10-26T15:12:52.819696195Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1123518,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:14:01.79961134Z",
	            "FinishedAt": "2025-10-26T15:14:00.285285428Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/f2c26d088cf784b9fa3246255055619f610c4cc9d4a3450f83c3d6e8e7c2648a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f2c26d088cf784b9fa3246255055619f610c4cc9d4a3450f83c3d6e8e7c2648a/hostname",
	        "HostsPath": "/var/lib/docker/containers/f2c26d088cf784b9fa3246255055619f610c4cc9d4a3450f83c3d6e8e7c2648a/hosts",
	        "LogPath": "/var/lib/docker/containers/f2c26d088cf784b9fa3246255055619f610c4cc9d4a3450f83c3d6e8e7c2648a/f2c26d088cf784b9fa3246255055619f610c4cc9d4a3450f83c3d6e8e7c2648a-json.log",
	        "Name": "/default-k8s-diff-port-790012",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-790012:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-790012",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f2c26d088cf784b9fa3246255055619f610c4cc9d4a3450f83c3d6e8e7c2648a",
	                "LowerDir": "/var/lib/docker/overlay2/cb1f825d0d0a1ba72d95cb70e9ee9f8fe5570837cf0ab7bbcdefcc67f9bd4518-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb1f825d0d0a1ba72d95cb70e9ee9f8fe5570837cf0ab7bbcdefcc67f9bd4518/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb1f825d0d0a1ba72d95cb70e9ee9f8fe5570837cf0ab7bbcdefcc67f9bd4518/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb1f825d0d0a1ba72d95cb70e9ee9f8fe5570837cf0ab7bbcdefcc67f9bd4518/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-790012",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-790012/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-790012",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-790012",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-790012",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7641d74e084bc2cb5e05c645147115df4e3228d6a080ebff9eccae99b1456abf",
	            "SandboxKey": "/var/run/docker/netns/7641d74e084b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33877"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33878"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33881"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33879"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33880"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-790012": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:03:3e:9a:18:08",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eb8db690bfd734c5a8c0b627f3759fdde408bba40a95fd914967f52dd3a0e0bf",
	                    "EndpointID": "0e43cd60c16fd7e3c37003f3ad9137d27bb9c1ede1cfa30f4f7e90f7462303a4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-790012",
	                        "f2c26d088cf7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
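Editor's note: the inspect output ties the two port sections together. HostConfig.PortBindings requests HostPort "" for each exposed port, so Docker assigns free ephemeral ports, and the chosen values appear only under NetworkSettings.Ports (22/tcp maps to 33877, exactly the SSH port the pause log dialed). The harness reads that mapping with the same inspect template that appears in the stderr above; the sketch below shows the mechanics, with the container name taken from this report.

	// Sketch: resolve the ephemeral host port Docker assigned to a container
	// port, using the inspect template seen in the pause log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostPort(container, port string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostPort("default-k8s-diff-port-790012", "22/tcp")
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", port) // "33877" per the inspect output above
	}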
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-790012 -n default-k8s-diff-port-790012
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-790012 -n default-k8s-diff-port-790012: exit status 2 (403.308475ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
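Editor's note: the status probe above uses --format={{.Host}}, a Go text/template rendered over minikube's status struct, which is why plain "Running" is all that lands on stdout even though the exit code is non-zero (the harness itself notes exit status 2 "may be ok"). A stand-in sketch of the template mechanics follows; the Status struct and its extra fields are illustrative assumptions, and only the {{.Host}} selector comes from the command above.

	// Sketch of the --format mechanics: a Go text/template rendered over a
	// status struct. The struct and field values are stand-ins; only the
	// {{.Host}} selector comes from the command in this report.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil { // prints "Running"
			panic(err)
		}
	}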
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-790012 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-790012 logs -n 25: (1.763515739s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p custom-flannel-498531 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-498531        │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │                     │
	│ ssh     │ -p kindnet-498531 sudo cat /etc/nsswitch.conf                                                                                                                      │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p kindnet-498531 sudo cat /etc/hosts                                                                                                                              │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p kindnet-498531 sudo cat /etc/resolv.conf                                                                                                                        │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p kindnet-498531 sudo crictl pods                                                                                                                                 │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p kindnet-498531 sudo crictl ps --all                                                                                                                             │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:14 UTC │ 26 Oct 25 15:14 UTC │
	│ ssh     │ -p kindnet-498531 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                                      │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo ip a s                                                                                                                                      │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo ip r s                                                                                                                                      │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo iptables-save                                                                                                                               │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo iptables -t nat -L -n -v                                                                                                                    │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo systemctl status kubelet --all --full --no-pager                                                                                            │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo systemctl cat kubelet --no-pager                                                                                                            │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ image   │ default-k8s-diff-port-790012 image list --format=json                                                                                                              │ default-k8s-diff-port-790012 │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                             │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ pause   │ -p default-k8s-diff-port-790012 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-790012 │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │                     │
	│ ssh     │ -p kindnet-498531 sudo cat /etc/kubernetes/kubelet.conf                                                                                                            │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo cat /var/lib/kubelet/config.yaml                                                                                                            │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo systemctl status docker --all --full --no-pager                                                                                             │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │                     │
	│ ssh     │ -p kindnet-498531 sudo systemctl cat docker --no-pager                                                                                                             │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo cat /etc/docker/daemon.json                                                                                                                 │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │                     │
	│ ssh     │ -p kindnet-498531 sudo docker system info                                                                                                                          │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │                     │
	│ ssh     │ -p kindnet-498531 sudo systemctl status cri-docker --all --full --no-pager                                                                                         │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │                     │
	│ ssh     │ -p kindnet-498531 sudo systemctl cat cri-docker --no-pager                                                                                                         │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                    │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:14:46
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:14:46.418049 1136694 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:14:46.418363 1136694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:14:46.418372 1136694 out.go:374] Setting ErrFile to fd 2...
	I1026 15:14:46.418376 1136694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:14:46.418596 1136694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:14:46.419084 1136694 out.go:368] Setting JSON to false
	I1026 15:14:46.420367 1136694 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10634,"bootTime":1761481052,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:14:46.420482 1136694 start.go:141] virtualization: kvm guest
	I1026 15:14:46.422589 1136694 out.go:179] * [custom-flannel-498531] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:14:46.423907 1136694 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:14:46.423917 1136694 notify.go:220] Checking for updates...
	I1026 15:14:46.426265 1136694 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:14:46.427557 1136694 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:14:46.428739 1136694 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 15:14:46.430067 1136694 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:14:46.431375 1136694 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:14:46.433224 1136694 config.go:182] Loaded profile config "calico-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:46.433327 1136694 config.go:182] Loaded profile config "default-k8s-diff-port-790012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:46.433394 1136694 config.go:182] Loaded profile config "kindnet-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:46.433489 1136694 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:14:46.458059 1136694 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 15:14:46.458190 1136694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:14:46.521520 1136694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 15:14:46.508722717 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:14:46.521645 1136694 docker.go:318] overlay module found
	I1026 15:14:46.523535 1136694 out.go:179] * Using the docker driver based on user configuration
	W1026 15:14:41.584618 1123102 pod_ready.go:104] pod "coredns-66bc5c9577-shw6l" is not "Ready", error: <nil>
	W1026 15:14:43.586949 1123102 pod_ready.go:104] pod "coredns-66bc5c9577-shw6l" is not "Ready", error: <nil>
	W1026 15:14:46.084436 1123102 pod_ready.go:104] pod "coredns-66bc5c9577-shw6l" is not "Ready", error: <nil>
	I1026 15:14:46.524856 1136694 start.go:305] selected driver: docker
	I1026 15:14:46.524873 1136694 start.go:925] validating driver "docker" against <nil>
	I1026 15:14:46.524885 1136694 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:14:46.525533 1136694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:14:46.583777 1136694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 15:14:46.572424831 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:14:46.583986 1136694 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:14:46.584343 1136694 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:14:46.586150 1136694 out.go:179] * Using Docker driver with root privileges
	I1026 15:14:46.587281 1136694 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1026 15:14:46.587312 1136694 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1026 15:14:46.587397 1136694 start.go:349] cluster config:
	{Name:custom-flannel-498531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:14:46.588841 1136694 out.go:179] * Starting "custom-flannel-498531" primary control-plane node in "custom-flannel-498531" cluster
	I1026 15:14:46.590004 1136694 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:14:46.591088 1136694 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:14:46.592108 1136694 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:14:46.592144 1136694 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 15:14:46.592159 1136694 cache.go:58] Caching tarball of preloaded images
	I1026 15:14:46.592223 1136694 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:14:46.592281 1136694 preload.go:233] Found /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 15:14:46.592294 1136694 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:14:46.592410 1136694 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/config.json ...
	I1026 15:14:46.592432 1136694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/config.json: {Name:mk1e6ba6860d3905e9a58ab77af75d89def4aa4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:46.614428 1136694 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:14:46.614450 1136694 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:14:46.614466 1136694 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:14:46.614496 1136694 start.go:360] acquireMachinesLock for custom-flannel-498531: {Name:mk935e6b1579707a1059f6202bda836a982e421d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:14:46.614588 1136694 start.go:364] duration metric: took 74.859µs to acquireMachinesLock for "custom-flannel-498531"
	I1026 15:14:46.614617 1136694 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-498531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-498531 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:14:46.614682 1136694 start.go:125] createHost starting for "" (driver="docker")
	I1026 15:14:42.464392 1131084 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 15:14:42.464418 1131084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I1026 15:14:42.480668 1131084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 15:14:43.681069 1131084 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.200351626s)
	I1026 15:14:43.681124 1131084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:14:43.681324 1131084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:43.681650 1131084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-498531 minikube.k8s.io/updated_at=2025_10_26T15_14_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=calico-498531 minikube.k8s.io/primary=true
	I1026 15:14:43.789179 1131084 ops.go:34] apiserver oom_adj: -16
	I1026 15:14:43.789455 1131084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:44.290293 1131084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:44.789371 1131084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:45.290119 1131084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:45.789668 1131084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:46.289380 1131084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:46.789908 1131084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:47.289702 1131084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:47.377479 1131084 kubeadm.go:1113] duration metric: took 3.696216416s to wait for elevateKubeSystemPrivileges
	I1026 15:14:47.377525 1131084 kubeadm.go:402] duration metric: took 14.298325959s to StartCluster
	I1026 15:14:47.377546 1131084 settings.go:142] acquiring lock: {Name:mkab79daecf1fab35293493e1e2484069a81f3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:47.377626 1131084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:14:47.379385 1131084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:47.379696 1131084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:14:47.379686 1131084 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:14:47.379797 1131084 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:14:47.379904 1131084 config.go:182] Loaded profile config "calico-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:47.379922 1131084 addons.go:69] Setting storage-provisioner=true in profile "calico-498531"
	I1026 15:14:47.379957 1131084 addons.go:238] Setting addon storage-provisioner=true in "calico-498531"
	I1026 15:14:47.379951 1131084 addons.go:69] Setting default-storageclass=true in profile "calico-498531"
	I1026 15:14:47.379997 1131084 host.go:66] Checking if "calico-498531" exists ...
	I1026 15:14:47.380000 1131084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-498531"
	I1026 15:14:47.380612 1131084 cli_runner.go:164] Run: docker container inspect calico-498531 --format={{.State.Status}}
	I1026 15:14:47.380709 1131084 cli_runner.go:164] Run: docker container inspect calico-498531 --format={{.State.Status}}
	I1026 15:14:47.381601 1131084 out.go:179] * Verifying Kubernetes components...
	I1026 15:14:47.383150 1131084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:14:47.411145 1131084 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:14:47.411280 1131084 addons.go:238] Setting addon default-storageclass=true in "calico-498531"
	I1026 15:14:47.411347 1131084 host.go:66] Checking if "calico-498531" exists ...
	I1026 15:14:47.411874 1131084 cli_runner.go:164] Run: docker container inspect calico-498531 --format={{.State.Status}}
	I1026 15:14:47.414618 1131084 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:14:47.414642 1131084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:14:47.414728 1131084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-498531
	I1026 15:14:47.447334 1131084 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:14:47.447363 1131084 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:14:47.447441 1131084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-498531
	I1026 15:14:47.449391 1131084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33882 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/calico-498531/id_rsa Username:docker}
	I1026 15:14:47.476468 1131084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33882 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/calico-498531/id_rsa Username:docker}
	I1026 15:14:47.531624 1131084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 15:14:47.551697 1131084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:14:47.615023 1131084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:14:47.631258 1131084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:14:47.768598 1131084 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
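	(Note: the block that the long sed pipeline at 15:14:47.531624 injects into the CoreDNS Corefile, reconstructed from the sed expressions themselves — a sketch, not captured output — should look roughly like this:)
	
	        hosts {
	           192.168.94.1 host.minikube.internal
	           fallthrough
	        }
	
	(The same pipeline also inserts a "log" directive ahead of "errors". The live result can be checked with: kubectl -n kube-system get configmap coredns -o yaml)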
	I1026 15:14:47.770046 1131084 node_ready.go:35] waiting up to 15m0s for node "calico-498531" to be "Ready" ...
	I1026 15:14:47.980329 1131084 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1026 15:14:46.616581 1136694 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 15:14:46.616854 1136694 start.go:159] libmachine.API.Create for "custom-flannel-498531" (driver="docker")
	I1026 15:14:46.616883 1136694 client.go:168] LocalClient.Create starting
	I1026 15:14:46.616939 1136694 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem
	I1026 15:14:46.616973 1136694 main.go:141] libmachine: Decoding PEM data...
	I1026 15:14:46.616988 1136694 main.go:141] libmachine: Parsing certificate...
	I1026 15:14:46.617046 1136694 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem
	I1026 15:14:46.617066 1136694 main.go:141] libmachine: Decoding PEM data...
	I1026 15:14:46.617075 1136694 main.go:141] libmachine: Parsing certificate...
	I1026 15:14:46.617422 1136694 cli_runner.go:164] Run: docker network inspect custom-flannel-498531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 15:14:46.635057 1136694 cli_runner.go:211] docker network inspect custom-flannel-498531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 15:14:46.635136 1136694 network_create.go:284] running [docker network inspect custom-flannel-498531] to gather additional debugging logs...
	I1026 15:14:46.635157 1136694 cli_runner.go:164] Run: docker network inspect custom-flannel-498531
	W1026 15:14:46.653131 1136694 cli_runner.go:211] docker network inspect custom-flannel-498531 returned with exit code 1
	I1026 15:14:46.653182 1136694 network_create.go:287] error running [docker network inspect custom-flannel-498531]: docker network inspect custom-flannel-498531: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-498531 not found
	I1026 15:14:46.653202 1136694 network_create.go:289] output of [docker network inspect custom-flannel-498531]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-498531 not found
	
	** /stderr **
	I1026 15:14:46.653378 1136694 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:14:46.671316 1136694 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fa58be42f477 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:e4:ad:45:54:67} reservation:<nil>}
	I1026 15:14:46.672082 1136694 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-788b1aa150f9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:3d:9b:f7:9b:2d} reservation:<nil>}
	I1026 15:14:46.672838 1136694 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3ea0f8afe5af IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:d6:81:f4:17:77:eb} reservation:<nil>}
	I1026 15:14:46.673678 1136694 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001debc50}
	I1026 15:14:46.673705 1136694 network_create.go:124] attempt to create docker network custom-flannel-498531 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1026 15:14:46.673770 1136694 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-498531 custom-flannel-498531
	I1026 15:14:46.733380 1136694 network_create.go:108] docker network custom-flannel-498531 192.168.76.0/24 created
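	(Note: the three "skipping subnet" probes above show minikube walking candidate private /24 networks until it finds a free one. The same view of which subnets are already taken can be reproduced with the stock docker CLI — a sketch, assuming every network has an IPAM config:)
	
	        docker network ls -q | xargs docker network inspect \
	          --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'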
	I1026 15:14:46.733412 1136694 kic.go:121] calculated static IP "192.168.76.2" for the "custom-flannel-498531" container
	I1026 15:14:46.733481 1136694 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 15:14:46.752603 1136694 cli_runner.go:164] Run: docker volume create custom-flannel-498531 --label name.minikube.sigs.k8s.io=custom-flannel-498531 --label created_by.minikube.sigs.k8s.io=true
	I1026 15:14:46.771084 1136694 oci.go:103] Successfully created a docker volume custom-flannel-498531
	I1026 15:14:46.771229 1136694 cli_runner.go:164] Run: docker run --rm --name custom-flannel-498531-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-498531 --entrypoint /usr/bin/test -v custom-flannel-498531:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 15:14:47.187227 1136694 oci.go:107] Successfully prepared a docker volume custom-flannel-498531
	I1026 15:14:47.187269 1136694 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:14:47.187289 1136694 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 15:14:47.187349 1136694 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-498531:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
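	(Note: the docker run above untars the preloaded image tarball into the named volume custom-flannel-498531, which is later mounted at /var inside the node container. A quick spot-check of the extracted layout — the busybox image and the subpath are assumptions; any shell-capable image works:)
	
	        docker run --rm -v custom-flannel-498531:/var busybox ls /var/lib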
	W1026 15:14:48.084996 1123102 pod_ready.go:104] pod "coredns-66bc5c9577-shw6l" is not "Ready", error: <nil>
	I1026 15:14:50.084591 1123102 pod_ready.go:94] pod "coredns-66bc5c9577-shw6l" is "Ready"
	I1026 15:14:50.084625 1123102 pod_ready.go:86] duration metric: took 37.006190261s for pod "coredns-66bc5c9577-shw6l" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:50.087960 1123102 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:50.093157 1123102 pod_ready.go:94] pod "etcd-default-k8s-diff-port-790012" is "Ready"
	I1026 15:14:50.093223 1123102 pod_ready.go:86] duration metric: took 5.237ms for pod "etcd-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:50.095812 1123102 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:50.100646 1123102 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-790012" is "Ready"
	I1026 15:14:50.100678 1123102 pod_ready.go:86] duration metric: took 4.841033ms for pod "kube-apiserver-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:50.103009 1123102 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:50.281974 1123102 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-790012" is "Ready"
	I1026 15:14:50.282007 1123102 pod_ready.go:86] duration metric: took 178.973035ms for pod "kube-controller-manager-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:50.482366 1123102 pod_ready.go:83] waiting for pod "kube-proxy-wk2nn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:50.881696 1123102 pod_ready.go:94] pod "kube-proxy-wk2nn" is "Ready"
	I1026 15:14:50.881732 1123102 pod_ready.go:86] duration metric: took 399.339489ms for pod "kube-proxy-wk2nn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:51.082131 1123102 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:51.481537 1123102 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-790012" is "Ready"
	I1026 15:14:51.481566 1123102 pod_ready.go:86] duration metric: took 399.410322ms for pod "kube-scheduler-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:51.481578 1123102 pod_ready.go:40] duration metric: took 38.407876759s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:14:51.527437 1123102 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:14:51.603546 1123102 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-790012" cluster and "default" namespace by default
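	(Note: once "Done!" is printed the kubeconfig context has already been switched, so plain kubectl talks to this cluster — standard kubectl usage, nothing minikube-specific:)
	
	        kubectl config current-context   # default-k8s-diff-port-790012
	        kubectl get pods -A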
	I1026 15:14:47.981580 1131084 addons.go:514] duration metric: took 601.789415ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 15:14:48.273280 1131084 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-498531" context rescaled to 1 replicas
	W1026 15:14:49.774364 1131084 node_ready.go:57] node "calico-498531" has "Ready":"False" status (will retry)
	W1026 15:14:51.929371 1131084 node_ready.go:57] node "calico-498531" has "Ready":"False" status (will retry)
	I1026 15:14:52.757629 1136694 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-498531:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.570222652s)
	I1026 15:14:52.757662 1136694 kic.go:203] duration metric: took 5.570370343s to extract preloaded images to volume ...
	W1026 15:14:52.757796 1136694 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 15:14:52.757832 1136694 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 15:14:52.757878 1136694 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 15:14:52.822780 1136694 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-498531 --name custom-flannel-498531 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-498531 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-498531 --network custom-flannel-498531 --ip 192.168.76.2 --volume custom-flannel-498531:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
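	(Note: the docker run above publishes ports 22, 2376, 5000, 8443 and 32443 on ephemeral 127.0.0.1 ports; the SSH port 33887 that appears later in this log can be recovered with the stock docker CLI:)
	
	        docker port custom-flannel-498531 22
	        # 127.0.0.1:33887   (ephemeral; differs between runs)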
	I1026 15:14:53.324119 1136694 cli_runner.go:164] Run: docker container inspect custom-flannel-498531 --format={{.State.Running}}
	I1026 15:14:53.347758 1136694 cli_runner.go:164] Run: docker container inspect custom-flannel-498531 --format={{.State.Status}}
	I1026 15:14:53.370970 1136694 cli_runner.go:164] Run: docker exec custom-flannel-498531 stat /var/lib/dpkg/alternatives/iptables
	I1026 15:14:53.421677 1136694 oci.go:144] the created container "custom-flannel-498531" has a running status.
	I1026 15:14:53.421723 1136694 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/custom-flannel-498531/id_rsa...
	I1026 15:14:53.562117 1136694 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-841519/.minikube/machines/custom-flannel-498531/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 15:14:53.594855 1136694 cli_runner.go:164] Run: docker container inspect custom-flannel-498531 --format={{.State.Status}}
	I1026 15:14:53.614861 1136694 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 15:14:53.614884 1136694 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-498531 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 15:14:53.671462 1136694 cli_runner.go:164] Run: docker container inspect custom-flannel-498531 --format={{.State.Status}}
	I1026 15:14:53.697370 1136694 machine.go:93] provisionDockerMachine start ...
	I1026 15:14:53.697495 1136694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-498531
	I1026 15:14:53.725922 1136694 main.go:141] libmachine: Using SSH client type: native
	I1026 15:14:53.726305 1136694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33887 <nil> <nil>}
	I1026 15:14:53.726330 1136694 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:14:53.884266 1136694 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-498531
	
	I1026 15:14:53.884302 1136694 ubuntu.go:182] provisioning hostname "custom-flannel-498531"
	I1026 15:14:53.884377 1136694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-498531
	I1026 15:14:53.906925 1136694 main.go:141] libmachine: Using SSH client type: native
	I1026 15:14:53.907359 1136694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33887 <nil> <nil>}
	I1026 15:14:53.907389 1136694 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-498531 && echo "custom-flannel-498531" | sudo tee /etc/hostname
	I1026 15:14:54.071145 1136694 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-498531
	
	I1026 15:14:54.071242 1136694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-498531
	I1026 15:14:54.095316 1136694 main.go:141] libmachine: Using SSH client type: native
	I1026 15:14:54.095631 1136694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33887 <nil> <nil>}
	I1026 15:14:54.095672 1136694 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-498531' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-498531/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-498531' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:14:54.244138 1136694 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:14:54.244183 1136694 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-841519/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-841519/.minikube}
	I1026 15:14:54.244212 1136694 ubuntu.go:190] setting up certificates
	I1026 15:14:54.244224 1136694 provision.go:84] configureAuth start
	I1026 15:14:54.244278 1136694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-498531
	I1026 15:14:54.262927 1136694 provision.go:143] copyHostCerts
	I1026 15:14:54.263006 1136694 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem, removing ...
	I1026 15:14:54.263033 1136694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem
	I1026 15:14:54.263114 1136694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem (1082 bytes)
	I1026 15:14:54.263283 1136694 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem, removing ...
	I1026 15:14:54.263301 1136694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem
	I1026 15:14:54.263349 1136694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem (1123 bytes)
	I1026 15:14:54.263620 1136694 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem, removing ...
	I1026 15:14:54.263644 1136694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem
	I1026 15:14:54.263692 1136694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem (1675 bytes)
	I1026 15:14:54.263830 1136694 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-498531 san=[127.0.0.1 192.168.76.2 custom-flannel-498531 localhost minikube]
	I1026 15:14:54.382550 1136694 provision.go:177] copyRemoteCerts
	I1026 15:14:54.382622 1136694 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:14:54.382661 1136694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-498531
	I1026 15:14:54.405322 1136694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33887 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/custom-flannel-498531/id_rsa Username:docker}
	I1026 15:14:54.515483 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:14:54.538864 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:14:54.560963 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1026 15:14:54.585769 1136694 provision.go:87] duration metric: took 341.526474ms to configureAuth
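	(Note: the server cert generated at 15:14:54.263830 embeds the SANs listed there — 127.0.0.1, 192.168.76.2, custom-flannel-498531, localhost, minikube. One way to confirm, using the server.pem path from the log and stock openssl:)
	
	        openssl x509 -noout -text \
	          -in /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem \
	          | grep -A1 'Subject Alternative Name'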
	I1026 15:14:54.585821 1136694 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:14:54.586033 1136694 config.go:182] Loaded profile config "custom-flannel-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:54.586209 1136694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-498531
	I1026 15:14:54.609410 1136694 main.go:141] libmachine: Using SSH client type: native
	I1026 15:14:54.609666 1136694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33887 <nil> <nil>}
	I1026 15:14:54.609692 1136694 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:14:54.921424 1136694 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:14:54.921459 1136694 machine.go:96] duration metric: took 1.224062657s to provisionDockerMachine
	I1026 15:14:54.921471 1136694 client.go:171] duration metric: took 8.304579996s to LocalClient.Create
	I1026 15:14:54.921492 1136694 start.go:167] duration metric: took 8.30463816s to libmachine.API.Create "custom-flannel-498531"
	I1026 15:14:54.921505 1136694 start.go:293] postStartSetup for "custom-flannel-498531" (driver="docker")
	I1026 15:14:54.921519 1136694 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:14:54.921577 1136694 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:14:54.921613 1136694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-498531
	I1026 15:14:54.945141 1136694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33887 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/custom-flannel-498531/id_rsa Username:docker}
	I1026 15:14:55.058731 1136694 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:14:55.063418 1136694 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:14:55.063450 1136694 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:14:55.063463 1136694 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/addons for local assets ...
	I1026 15:14:55.063525 1136694 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/files for local assets ...
	I1026 15:14:55.063658 1136694 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem -> 8450952.pem in /etc/ssl/certs
	I1026 15:14:55.063795 1136694 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:14:55.074456 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:14:55.101315 1136694 start.go:296] duration metric: took 179.790139ms for postStartSetup
	I1026 15:14:55.101749 1136694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-498531
	I1026 15:14:55.124625 1136694 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/config.json ...
	I1026 15:14:55.124957 1136694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:14:55.125006 1136694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-498531
	I1026 15:14:55.147347 1136694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33887 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/custom-flannel-498531/id_rsa Username:docker}
	I1026 15:14:55.254075 1136694 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:14:55.260969 1136694 start.go:128] duration metric: took 8.64626965s to createHost
	I1026 15:14:55.260999 1136694 start.go:83] releasing machines lock for "custom-flannel-498531", held for 8.64639809s
	I1026 15:14:55.261086 1136694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-498531
	I1026 15:14:55.286335 1136694 ssh_runner.go:195] Run: cat /version.json
	I1026 15:14:55.286396 1136694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-498531
	I1026 15:14:55.286542 1136694 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:14:55.286622 1136694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-498531
	I1026 15:14:55.310192 1136694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33887 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/custom-flannel-498531/id_rsa Username:docker}
	I1026 15:14:55.310484 1136694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33887 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/custom-flannel-498531/id_rsa Username:docker}
	I1026 15:14:55.497839 1136694 ssh_runner.go:195] Run: systemctl --version
	I1026 15:14:55.506975 1136694 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:14:55.553240 1136694 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:14:55.559856 1136694 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:14:55.559943 1136694 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:14:55.595684 1136694 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 15:14:55.595710 1136694 start.go:495] detecting cgroup driver to use...
	I1026 15:14:55.595748 1136694 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 15:14:55.595850 1136694 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:14:55.618245 1136694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:14:55.635705 1136694 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:14:55.635790 1136694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:14:55.657434 1136694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:14:55.683572 1136694 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:14:55.809753 1136694 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:14:55.933053 1136694 docker.go:234] disabling docker service ...
	I1026 15:14:55.933129 1136694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:14:55.960382 1136694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:14:55.977851 1136694 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:14:56.095589 1136694 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:14:56.224050 1136694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:14:56.242422 1136694 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:14:56.262587 1136694 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:14:56.262651 1136694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:56.276361 1136694 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 15:14:56.276438 1136694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:56.288661 1136694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:56.301313 1136694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:56.313188 1136694 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:14:56.324668 1136694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:56.336380 1136694 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:56.355220 1136694 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:56.367422 1136694 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:14:56.378382 1136694 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:14:56.388680 1136694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
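	(Note: after the sed edits above, the drop-in /etc/crio/crio.conf.d/02-crio.conf should carry roughly the following keys. The section headers come from stock CRI-O configuration and are not visible in this log, so treat this as a reconstruction, not captured file content:)
	
	        [crio.image]
	        pause_image = "registry.k8s.io/pause:3.10.1"
	
	        [crio.runtime]
	        cgroup_manager = "systemd"
	        conmon_cgroup = "pod"
	        default_sysctls = [
	          "net.ipv4.ip_unprivileged_port_start=0",
	        ]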
	I1026 15:14:53.774363 1131084 node_ready.go:49] node "calico-498531" is "Ready"
	I1026 15:14:53.774412 1131084 node_ready.go:38] duration metric: took 6.00432831s for node "calico-498531" to be "Ready" ...
	I1026 15:14:53.774432 1131084 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:14:53.774491 1131084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:14:53.790269 1131084 api_server.go:72] duration metric: took 6.410526773s to wait for apiserver process to appear ...
	I1026 15:14:53.790305 1131084 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:14:53.790332 1131084 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1026 15:14:53.795724 1131084 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1026 15:14:53.796848 1131084 api_server.go:141] control plane version: v1.34.1
	I1026 15:14:53.796876 1131084 api_server.go:131] duration metric: took 6.56385ms to wait for apiserver health ...
	I1026 15:14:53.796886 1131084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:14:53.804881 1131084 system_pods.go:59] 9 kube-system pods found
	I1026 15:14:53.804918 1131084 system_pods.go:61] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:14:53.804929 1131084 system_pods.go:61] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:14:53.804937 1131084 system_pods.go:61] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:53.804943 1131084 system_pods.go:61] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:14:53.804947 1131084 system_pods.go:61] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:14:53.804952 1131084 system_pods.go:61] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:14:53.804957 1131084 system_pods.go:61] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:14:53.804960 1131084 system_pods.go:61] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:14:53.804964 1131084 system_pods.go:61] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:14:53.804971 1131084 system_pods.go:74] duration metric: took 8.078637ms to wait for pod list to return data ...
	I1026 15:14:53.804980 1131084 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:14:53.808249 1131084 default_sa.go:45] found service account: "default"
	I1026 15:14:53.808283 1131084 default_sa.go:55] duration metric: took 3.287999ms for default service account to be created ...
	I1026 15:14:53.808295 1131084 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:14:53.811731 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:14:53.811768 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:14:53.811778 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:14:53.811785 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:53.811791 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:14:53.811798 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:14:53.811804 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:14:53.811809 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:14:53.811814 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:14:53.811821 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:14:53.811855 1131084 retry.go:31] will retry after 197.708289ms: missing components: kube-dns
	I1026 15:14:54.015045 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:14:54.015103 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:14:54.015117 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:14:54.015127 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:54.015136 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:14:54.015144 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:14:54.015151 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:14:54.015157 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:14:54.015176 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:14:54.015183 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:14:54.015205 1131084 retry.go:31] will retry after 253.035559ms: missing components: kube-dns
	I1026 15:14:54.272631 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:14:54.272666 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:14:54.272680 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:14:54.272693 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:54.272706 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:14:54.272714 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:14:54.272720 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:14:54.272731 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:14:54.272737 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:14:54.272746 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:14:54.272764 1131084 retry.go:31] will retry after 350.288095ms: missing components: kube-dns
	I1026 15:14:54.627566 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:14:54.627606 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:14:54.627618 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:14:54.627627 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:54.627636 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:14:54.627643 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:14:54.627651 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:14:54.627660 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:14:54.627665 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:14:54.627672 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:14:54.627696 1131084 retry.go:31] will retry after 380.19977ms: missing components: kube-dns
	I1026 15:14:55.012277 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:14:55.012321 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:14:55.012337 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:14:55.012348 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:55.012356 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:14:55.012369 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:14:55.012375 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:14:55.012383 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:14:55.012388 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:14:55.012394 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Running
	I1026 15:14:55.012420 1131084 retry.go:31] will retry after 552.616674ms: missing components: kube-dns
	I1026 15:14:55.570586 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:14:55.570638 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:14:55.570657 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:14:55.570668 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:55.570677 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:14:55.570684 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:14:55.570694 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:14:55.570702 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:14:55.570726 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:14:55.570735 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Running
	I1026 15:14:55.570758 1131084 retry.go:31] will retry after 595.931881ms: missing components: kube-dns
	I1026 15:14:56.173558 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:14:56.173590 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:14:56.173602 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:14:56.173609 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:56.173615 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:14:56.173620 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:14:56.173626 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:14:56.173630 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:14:56.173633 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:14:56.173636 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Running
	I1026 15:14:56.173653 1131084 retry.go:31] will retry after 1.138105559s: missing components: kube-dns
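	(Note: the retry loop above re-lists kube-system pods until kube-dns stops being the missing component, i.e. until coredns reports Ready. The equivalent manual watch with standard kubectl:)
	
	        kubectl -n kube-system get pods -l k8s-app=kube-dns -w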
	I1026 15:14:56.500281 1136694 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:14:56.976059 1136694 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:14:56.976149 1136694 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:14:56.982233 1136694 start.go:563] Will wait 60s for crictl version
	I1026 15:14:56.982305 1136694 ssh_runner.go:195] Run: which crictl
	I1026 15:14:56.987782 1136694 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:14:57.019400 1136694 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:14:57.019478 1136694 ssh_runner.go:195] Run: crio --version
	I1026 15:14:57.058761 1136694 ssh_runner.go:195] Run: crio --version
	I1026 15:14:57.097571 1136694 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:14:57.098970 1136694 cli_runner.go:164] Run: docker network inspect custom-flannel-498531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:14:57.121285 1136694 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 15:14:57.126902 1136694 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:14:57.141232 1136694 kubeadm.go:883] updating cluster {Name:custom-flannel-498531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:14:57.141418 1136694 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:14:57.141487 1136694 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:14:57.185708 1136694 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:14:57.185736 1136694 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:14:57.185792 1136694 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:14:57.219741 1136694 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:14:57.219764 1136694 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:14:57.219773 1136694 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1026 15:14:57.219928 1136694 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=custom-flannel-498531 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
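	In the drop-in above, the empty ExecStart= line is deliberate systemd syntax: it clears any ExecStart inherited from the base kubelet.service before the replacement command line takes effect. A small sketch (path and helper name assumed) of writing such a drop-in:

    package kubelet

    import (
    	"fmt"
    	"os"
    )

    // writeDropIn renders a 10-kubeadm.conf-style drop-in; the empty ExecStart=
    // resets the unit's inherited command before the new one is declared.
    func writeDropIn(path, execStart string) error {
    	unit := fmt.Sprintf("[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\nExecStart=%s\n\n[Install]\n", execStart)
    	return os.WriteFile(path, []byte(unit), 0644)
    }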
	I1026 15:14:57.220020 1136694 ssh_runner.go:195] Run: crio config
	I1026 15:14:57.287386 1136694 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1026 15:14:57.287440 1136694 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:14:57.287473 1136694 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-498531 NodeName:custom-flannel-498531 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:14:57.287673 1136694 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-498531"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 15:14:57.287761 1136694 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:14:57.299620 1136694 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:14:57.299700 1136694 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:14:57.309521 1136694 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1026 15:14:57.327482 1136694 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:14:57.349096 1136694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
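	The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the cluster config and shipped to /var/tmp/minikube/kubeadm.yaml.new before kubeadm runs. A minimal sketch, assuming a text/template-style renderer (struct and field names here are illustrative), of how such a fragment can be produced:

    package main

    import (
    	"os"
    	"text/template"
    )

    // initCfg is a trimmed InitConfiguration fragment with the two values that
    // vary per cluster left as template fields.
    const initCfg = "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: {{.NodeIP}}\n  bindPort: {{.APIServerPort}}\n"

    func main() {
    	t := template.Must(template.New("init").Parse(initCfg))
    	_ = t.Execute(os.Stdout, struct {
    		NodeIP        string
    		APIServerPort int
    	}{NodeIP: "192.168.76.2", APIServerPort: 8443})
    }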
	I1026 15:14:57.367341 1136694 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:14:57.372149 1136694 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:14:57.385408 1136694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:14:57.506841 1136694 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:14:57.538470 1136694 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531 for IP: 192.168.76.2
	I1026 15:14:57.538499 1136694 certs.go:195] generating shared ca certs ...
	I1026 15:14:57.538521 1136694 certs.go:227] acquiring lock for ca certs: {Name:mkc310765b5f037cf348f6c57ba521193a825757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:57.538947 1136694 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key
	I1026 15:14:57.539018 1136694 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key
	I1026 15:14:57.539035 1136694 certs.go:257] generating profile certs ...
	I1026 15:14:57.539110 1136694 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/client.key
	I1026 15:14:57.539136 1136694 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/client.crt with IP's: []
	I1026 15:14:57.852342 1136694 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/client.crt ...
	I1026 15:14:57.852377 1136694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/client.crt: {Name:mkf162f0a6e6d1d4549566eb1d6d1dfa27c3abaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:57.852577 1136694 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/client.key ...
	I1026 15:14:57.852596 1136694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/client.key: {Name:mk06b39f95a3cbe2cc52c0719a837a9916d83c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:57.852711 1136694 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.key.805c0865
	I1026 15:14:57.852727 1136694 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.crt.805c0865 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1026 15:14:58.003410 1136694 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.crt.805c0865 ...
	I1026 15:14:58.003440 1136694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.crt.805c0865: {Name:mk4565bb8ce08333692dd2592db70c2917edc3ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:58.003664 1136694 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.key.805c0865 ...
	I1026 15:14:58.003690 1136694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.key.805c0865: {Name:mkf5f2c5af49d0f3a57185e5f5906604daa129b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:58.003829 1136694 certs.go:382] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.crt.805c0865 -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.crt
	I1026 15:14:58.003951 1136694 certs.go:386] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.key.805c0865 -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.key
	I1026 15:14:58.004049 1136694 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/proxy-client.key
	I1026 15:14:58.004073 1136694 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/proxy-client.crt with IP's: []
	I1026 15:14:58.280120 1136694 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/proxy-client.crt ...
	I1026 15:14:58.280149 1136694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/proxy-client.crt: {Name:mk23a2d95c2d377b35426f1f4ffa697933107a56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:58.280334 1136694 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/proxy-client.key ...
	I1026 15:14:58.280358 1136694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/proxy-client.key: {Name:mk423f4c3c213d0107bdffa2b5ff01cb2c0371e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
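	Each profile certificate generated above is a leaf signed by the shared minikubeCA loaded earlier. A hedged sketch of that signing step with crypto/x509 (subject, lifetime, and key size are assumptions, not minikube's exact values):

    package certs

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"time"
    )

    // signClientCert creates a fresh key pair and returns a DER client cert
    // signed by the given CA, analogous to the "generating signed profile cert" steps.
    func signClientCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return der, key, err
    }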
	I1026 15:14:58.280617 1136694 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem (1338 bytes)
	W1026 15:14:58.280656 1136694 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095_empty.pem, impossibly tiny 0 bytes
	I1026 15:14:58.280666 1136694 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:14:58.280692 1136694 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:14:58.280725 1136694 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:14:58.280758 1136694 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem (1675 bytes)
	I1026 15:14:58.280820 1136694 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:14:58.281697 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:14:58.309083 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:14:58.335131 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:14:58.365291 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:14:58.392506 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1026 15:14:58.421409 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 15:14:58.445718 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:14:58.475411 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:14:58.501425 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /usr/share/ca-certificates/8450952.pem (1708 bytes)
	I1026 15:14:58.529743 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:14:58.553746 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem --> /usr/share/ca-certificates/845095.pem (1338 bytes)
	I1026 15:14:58.578120 1136694 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:14:58.595811 1136694 ssh_runner.go:195] Run: openssl version
	I1026 15:14:58.604311 1136694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8450952.pem && ln -fs /usr/share/ca-certificates/8450952.pem /etc/ssl/certs/8450952.pem"
	I1026 15:14:58.616516 1136694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8450952.pem
	I1026 15:14:58.621628 1136694 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:26 /usr/share/ca-certificates/8450952.pem
	I1026 15:14:58.621697 1136694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8450952.pem
	I1026 15:14:58.678756 1136694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8450952.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:14:58.736817 1136694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:14:58.747916 1136694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:14:58.753440 1136694 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:14 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:14:58.753506 1136694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:14:58.809269 1136694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:14:58.822505 1136694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845095.pem && ln -fs /usr/share/ca-certificates/845095.pem /etc/ssl/certs/845095.pem"
	I1026 15:14:58.835046 1136694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845095.pem
	I1026 15:14:58.841222 1136694 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:26 /usr/share/ca-certificates/845095.pem
	I1026 15:14:58.841295 1136694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845095.pem
	I1026 15:14:58.899735 1136694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/845095.pem /etc/ssl/certs/51391683.0"
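	The test/ln sequence above installs each PEM under /usr/share/ca-certificates and links it into /etc/ssl/certs under its subject-name hash, the <hash>.0 form OpenSSL consults when building trust chains (hence the openssl x509 -hash calls in between). A sketch that shells out to the same command (helper name illustrative):

    package pki

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCA links certPath into /etc/ssl/certs under its OpenSSL subject
    // hash, reproducing the "ln -fs ... /etc/ssl/certs/<hash>.0" step above.
    func installCA(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
    	_ = os.Remove(link) // ln -fs semantics: replace an existing link
    	return os.Symlink(certPath, link)
    }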
	I1026 15:14:58.914533 1136694 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:14:58.920480 1136694 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 15:14:58.920536 1136694 kubeadm.go:400] StartCluster: {Name:custom-flannel-498531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:14:58.920632 1136694 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:14:58.920690 1136694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:14:58.955737 1136694 cri.go:89] found id: ""
	I1026 15:14:58.955832 1136694 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:14:58.964936 1136694 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:14:58.974974 1136694 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 15:14:58.975030 1136694 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:14:58.984671 1136694 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:14:58.984690 1136694 kubeadm.go:157] found existing configuration files:
	
	I1026 15:14:58.984738 1136694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:14:58.993947 1136694 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:14:58.994022 1136694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:14:59.001940 1136694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:14:59.010070 1136694 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:14:59.010118 1136694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:14:59.017831 1136694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:14:59.026258 1136694 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:14:59.026323 1136694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:14:59.034610 1136694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:14:59.043384 1136694 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:14:59.043450 1136694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
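	The grep-then-rm passes above treat each kubeconfig as stale unless it already points at control-plane.minikube.internal:8443; on a first start none of the files exist, so every grep exits with status 2 and the rm is a no-op. A compact sketch of that check (function name assumed):

    package kubecfg

    import (
    	"os"
    	"strings"
    )

    // cleanStaleConfig keeps a kubeconfig only if it already targets the
    // expected endpoint; otherwise it removes the file so kubeadm regenerates it.
    func cleanStaleConfig(path, endpoint string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		if os.IsNotExist(err) {
    			return nil // first start: nothing to clean
    		}
    		return err
    	}
    	if strings.Contains(string(data), endpoint) {
    		return nil // config already points at this endpoint
    	}
    	return os.Remove(path)
    }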
	I1026 15:14:59.051761 1136694 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 15:14:59.097672 1136694 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 15:14:59.097748 1136694 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 15:14:59.122290 1136694 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 15:14:59.122396 1136694 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 15:14:59.122452 1136694 kubeadm.go:318] OS: Linux
	I1026 15:14:59.122551 1136694 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 15:14:59.122651 1136694 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 15:14:59.122726 1136694 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 15:14:59.122833 1136694 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 15:14:59.122901 1136694 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 15:14:59.122990 1136694 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 15:14:59.123081 1136694 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 15:14:59.123146 1136694 kubeadm.go:318] CGROUPS_IO: enabled
	I1026 15:14:59.188421 1136694 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 15:14:59.188558 1136694 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 15:14:59.188685 1136694 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 15:14:59.199526 1136694 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 15:14:59.288433 1136694 out.go:252]   - Generating certificates and keys ...
	I1026 15:14:59.288607 1136694 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:14:59.288709 1136694 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:14:59.579438 1136694 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:14:59.788772 1136694 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:15:00.087854 1136694 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:15:00.898819 1136694 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:15:01.342723 1136694 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:15:01.343706 1136694 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-498531 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 15:14:57.316754 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:14:57.316787 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:14:57.316796 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:14:57.316803 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:57.316808 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running
	I1026 15:14:57.316813 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:14:57.316817 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:14:57.316820 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:14:57.316823 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:14:57.316826 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Running
	I1026 15:14:57.316846 1131084 retry.go:31] will retry after 1.30566337s: missing components: kube-dns
	I1026 15:14:58.628420 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:14:58.628462 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:14:58.628475 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:14:58.628486 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:58.628491 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running
	I1026 15:14:58.628499 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:14:58.628504 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:14:58.628508 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:14:58.628511 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:14:58.628514 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Running
	I1026 15:14:58.628531 1131084 retry.go:31] will retry after 1.632507044s: missing components: kube-dns
	I1026 15:15:00.274975 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:15:00.275017 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:15:00.275033 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Pending / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:15:00.275046 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:15:00.275052 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running
	I1026 15:15:00.275059 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:15:00.275064 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:15:00.275072 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:15:00.275077 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:15:00.275081 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Running
	I1026 15:15:00.275101 1131084 retry.go:31] will retry after 1.463967354s: missing components: kube-dns
	I1026 15:15:01.744778 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:15:01.744819 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:15:01.744830 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:15:01.744841 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:15:01.744847 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running
	I1026 15:15:01.744854 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:15:01.744859 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:15:01.744866 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:15:01.744871 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:15:01.744878 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Running
	I1026 15:15:01.744925 1131084 retry.go:31] will retry after 2.74736786s: missing components: kube-dns
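	The interleaved retry.go lines come from a second test profile (calico-498531) polling kube-system pods with a growing, jittered delay until kube-dns reports Running. A minimal sketch of such a wait loop (timeout and delay bounds are assumptions):

    package wait

    import (
    	"errors"
    	"math/rand"
    	"time"
    )

    // waitFor polls check() with a jittered delay until it succeeds or the
    // timeout elapses, mirroring the "will retry after ..." log lines.
    func waitFor(check func() error, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := check(); err == nil {
    			return nil
    		}
    		time.Sleep(time.Second + time.Duration(rand.Int63n(int64(2*time.Second))))
    	}
    	return errors.New("timed out waiting for components")
    }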
	I1026 15:15:01.590933 1136694 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:15:01.591114 1136694 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-498531 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 15:15:01.808275 1136694 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:15:02.520741 1136694 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:15:02.725101 1136694 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:15:02.725284 1136694 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:15:02.756033 1136694 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:15:03.315066 1136694 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 15:15:03.768933 1136694 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:15:04.198586 1136694 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:15:04.707055 1136694 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:15:04.707751 1136694 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:15:04.713567 1136694 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 15:15:04.717011 1136694 out.go:252]   - Booting up control plane ...
	I1026 15:15:04.717141 1136694 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:15:04.717253 1136694 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:15:04.717339 1136694 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:15:04.733293 1136694 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:15:04.733436 1136694 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:15:04.741630 1136694 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:15:04.742121 1136694 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:15:04.742204 1136694 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:15:04.856462 1136694 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:15:04.856604 1136694 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 15:15:06.358282 1136694 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501872091s
	I1026 15:15:06.362269 1136694 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 15:15:06.362384 1136694 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1026 15:15:06.363139 1136694 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 15:15:06.363282 1136694 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	
	
	==> CRI-O <==
	Oct 26 15:14:22 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:22.994391493Z" level=info msg="Started container" PID=1712 containerID=cdbfd20ef6c053a16e047726ea87d829905474a03ff3d7deea21f260a640d390 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2/dashboard-metrics-scraper id=81329831-40a1-441a-83d9-9415324a76e2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=34387a55654c14b7f464c1c7762f7b6d6d871d5cdc395dafc14b0ff7863efddc
	Oct 26 15:14:23 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:23.88467308Z" level=info msg="Removing container: 753d12c04fbccbccdd889b52912cb9703b66ac2088032f0111ccb5b54e922476" id=ec2888de-258a-4301-bdb1-72e50153d008 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:14:23 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:23.896597337Z" level=info msg="Removed container 753d12c04fbccbccdd889b52912cb9703b66ac2088032f0111ccb5b54e922476: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2/dashboard-metrics-scraper" id=ec2888de-258a-4301-bdb1-72e50153d008 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:14:39 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:39.798747691Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=134c4e67-c79f-42f3-a4e9-54259d910291 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:14:39 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:39.799950919Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9a80dc6b-3e48-4b0e-b71e-aff25d7a0c60 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:14:39 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:39.801333985Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2/dashboard-metrics-scraper" id=2b7444ac-1e6a-4577-9674-63d7ca18507b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:14:39 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:39.801504499Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:39 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:39.808895226Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:39 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:39.80944952Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:39 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:39.839404614Z" level=info msg="Created container 5b66042dae93c2be0b0c8e834cd5991a9a551117f3afe526e424fb409f564737: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2/dashboard-metrics-scraper" id=2b7444ac-1e6a-4577-9674-63d7ca18507b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:14:39 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:39.840107144Z" level=info msg="Starting container: 5b66042dae93c2be0b0c8e834cd5991a9a551117f3afe526e424fb409f564737" id=24b61505-5cc0-4da1-91a0-f6d356366d77 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:14:39 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:39.842144936Z" level=info msg="Started container" PID=1722 containerID=5b66042dae93c2be0b0c8e834cd5991a9a551117f3afe526e424fb409f564737 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2/dashboard-metrics-scraper id=24b61505-5cc0-4da1-91a0-f6d356366d77 name=/runtime.v1.RuntimeService/StartContainer sandboxID=34387a55654c14b7f464c1c7762f7b6d6d871d5cdc395dafc14b0ff7863efddc
	Oct 26 15:14:39 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:39.927533412Z" level=info msg="Removing container: cdbfd20ef6c053a16e047726ea87d829905474a03ff3d7deea21f260a640d390" id=576507a1-a62b-453d-9bbd-0525fd7678af name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:14:39 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:39.938305107Z" level=info msg="Removed container cdbfd20ef6c053a16e047726ea87d829905474a03ff3d7deea21f260a640d390: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2/dashboard-metrics-scraper" id=576507a1-a62b-453d-9bbd-0525fd7678af name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:14:42 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:42.93899039Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9f43c39b-21a8-4b36-bd4b-840425082972 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:14:42 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:42.940184582Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=732e950a-053a-492a-860f-dcb29824af4a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:14:42 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:42.941602049Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5fd3016f-dec9-4c07-8aad-ab005de4efab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:14:42 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:42.941721164Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:42 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:42.948979423Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:42 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:42.949231837Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/11946fff12ca4773a24858d9db1d5667cc25d017920fa754b18391febc453eb3/merged/etc/passwd: no such file or directory"
	Oct 26 15:14:42 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:42.949269909Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/11946fff12ca4773a24858d9db1d5667cc25d017920fa754b18391febc453eb3/merged/etc/group: no such file or directory"
	Oct 26 15:14:42 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:42.949606202Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:42 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:42.984373337Z" level=info msg="Created container a7481686d0bee57dc98cb02e1b36088f9ec209bfc7d121c41d4c799bc73c0ba1: kube-system/storage-provisioner/storage-provisioner" id=5fd3016f-dec9-4c07-8aad-ab005de4efab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:14:42 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:42.985633349Z" level=info msg="Starting container: a7481686d0bee57dc98cb02e1b36088f9ec209bfc7d121c41d4c799bc73c0ba1" id=fdf149f0-5a16-48b9-8a3b-d1cefa7fa19a name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:14:42 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:42.989282695Z" level=info msg="Started container" PID=1736 containerID=a7481686d0bee57dc98cb02e1b36088f9ec209bfc7d121c41d4c799bc73c0ba1 description=kube-system/storage-provisioner/storage-provisioner id=fdf149f0-5a16-48b9-8a3b-d1cefa7fa19a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b90970ccbc9114d9579b29545c1c86e16c47264be856f1717ae35bcd218f12b8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	a7481686d0bee       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   b90970ccbc911       storage-provisioner                                    kube-system
	5b66042dae93c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago      Exited              dashboard-metrics-scraper   2                   34387a55654c1       dashboard-metrics-scraper-6ffb444bf9-kfgm2             kubernetes-dashboard
	f7bce916e5757       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago      Running             kubernetes-dashboard        0                   68cdea23a0dde       kubernetes-dashboard-855c9754f9-pj966                  kubernetes-dashboard
	42b1e3115d00b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   fa7798d815ab4       busybox                                                default
	cdcd33a110ab7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   f138c76ccf852       coredns-66bc5c9577-shw6l                               kube-system
	340c4006e10f1       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   a834e70122398       kube-proxy-wk2nn                                       kube-system
	86dd13cec7ebd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   b90970ccbc911       storage-provisioner                                    kube-system
	cffe05dde621a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   dacd3b2bdc152       kindnet-7ch5r                                          kube-system
	a2d02679a51ed       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           58 seconds ago      Running             kube-controller-manager     0                   7e344794aed06       kube-controller-manager-default-k8s-diff-port-790012   kube-system
	facf1cc394076       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           58 seconds ago      Running             kube-apiserver              0                   1b3392b163689       kube-apiserver-default-k8s-diff-port-790012            kube-system
	35d0d03944a78       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           58 seconds ago      Running             kube-scheduler              0                   31ef44d12f792       kube-scheduler-default-k8s-diff-port-790012            kube-system
	8aa809c39193f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           58 seconds ago      Running             etcd                        0                   5115d9ab3f888       etcd-default-k8s-diff-port-790012                      kube-system
	
	
	==> coredns [cdcd33a110ab72d97c137eac4a12dab06a6293ca167a79ea2a1ec28b0b18ccdc] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54479 - 8434 "HINFO IN 4125302407912520646.1330475005463334414. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.055089013s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
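	The dial failures above mean the CoreDNS pod could not reach the in-cluster apiserver VIP (10.96.0.1:443) at that moment, which usually indicates service routing had not yet been programmed by kube-proxy/CNI. A trivial probe for the same condition (address taken from the log):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Same target the CoreDNS reflector was timing out against.
    	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
    	if err != nil {
    		fmt.Println("apiserver VIP unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver VIP reachable")
    }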
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-790012
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-790012
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=default-k8s-diff-port-790012
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_13_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:13:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-790012
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:15:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:14:42 +0000   Sun, 26 Oct 2025 15:13:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:14:42 +0000   Sun, 26 Oct 2025 15:13:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:14:42 +0000   Sun, 26 Oct 2025 15:13:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:14:42 +0000   Sun, 26 Oct 2025 15:13:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-790012
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                fc981cf4-4aaf-42bf-b320-22476764867d
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-shw6l                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-default-k8s-diff-port-790012                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-7ch5r                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-default-k8s-diff-port-790012             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-790012    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-wk2nn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-default-k8s-diff-port-790012             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-kfgm2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-pj966                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 55s                  kube-proxy       
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m1s (x8 over 2m1s)  kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s (x8 over 2m1s)  kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s (x8 over 2m1s)  kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    114s                 kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  114s                 kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     114s                 kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node default-k8s-diff-port-790012 event: Registered Node default-k8s-diff-port-790012 in Controller
	  Normal  NodeReady                96s                  kubelet          Node default-k8s-diff-port-790012 status is now: NodeReady
	  Normal  Starting                 59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)    kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                  node-controller  Node default-k8s-diff-port-790012 event: Registered Node default-k8s-diff-port-790012 in Controller
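The Events table above records three kubelet start cycles (2m1s, 114s, and 59s old) and two node re-registrations (109s, 52s), which is consistent with the cluster having been stopped and restarted before this Pause step rather than with an unexpected crash. The same view can be reproduced against the live profile (context name taken from this report):

  kubectl --context default-k8s-diff-port-790012 describe node default-k8s-diff-port-790012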
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	
	
	==> etcd [8aa809c39193fbb83582e34b6983bd3f1e5fe7760c1faafff728462dd1913646] <==
	{"level":"warn","ts":"2025-10-26T15:14:10.671461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.680592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.692663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.702949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.711286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.720228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.728567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.738215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.756438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.768286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.784142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.794787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.804008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.815294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.827884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.836260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.843697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.851641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.861719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.883223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.892308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.908738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.916641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.923759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.977138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39902","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:15:07 up  2:57,  0 user,  load average: 4.95, 3.42, 2.15
	Linux default-k8s-diff-port-790012 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cffe05dde621ab9582c7dd3cc9f6894fcec1d0b54f1ed7baf19f6154e397b609] <==
	I1026 15:14:12.466898       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:14:12.467443       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 15:14:12.467729       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:14:12.467802       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:14:12.467835       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:14:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:14:12.762637       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:14:12.762699       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:14:12.762719       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:14:12.766415       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 15:14:13.164946       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:14:13.164976       1 metrics.go:72] Registering metrics
	I1026 15:14:13.165053       1 controller.go:711] "Syncing nftables rules"
	I1026 15:14:22.763464       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:14:22.763563       1 main.go:301] handling current node
	I1026 15:14:32.763452       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:14:32.763484       1 main.go:301] handling current node
	I1026 15:14:42.763283       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:14:42.763366       1 main.go:301] handling current node
	I1026 15:14:52.763102       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:14:52.763143       1 main.go:301] handling current node
	I1026 15:15:02.763427       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:15:02.763486       1 main.go:301] handling current node
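The single "nri plugin exited" line is typically benign: kindnet probes for an NRI socket and falls back when the container runtime has not enabled NRI. The "Handling node ... handling current node" pair repeating every 10 seconds is the normal reconcile loop. Whether the socket exists can be checked on the node (command shape follows the minikube CLI used elsewhere in this report):

  out/minikube-linux-amd64 -p default-k8s-diff-port-790012 ssh -- ls -l /var/run/nri/nri.sock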
	
	
	==> kube-apiserver [facf1cc394076aaa508c872a3c8c00a3efde72f036be55b7af624017d37ce6a3] <==
	I1026 15:14:11.664878       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 15:14:11.674280       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1026 15:14:11.675096       1 policy_source.go:240] refreshing policies
	E1026 15:14:11.690079       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 15:14:11.694380       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1026 15:14:11.695664       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 15:14:11.702049       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:14:11.713156       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 15:14:11.714696       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 15:14:11.713203       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 15:14:11.713177       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 15:14:11.722901       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:14:11.730246       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:14:11.733472       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 15:14:11.920484       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:14:12.178095       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 15:14:12.279999       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:14:12.320796       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:14:12.335264       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:14:12.440548       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.159.132"}
	I1026 15:14:12.459981       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.149.108"}
	I1026 15:14:12.597109       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:14:15.188975       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:14:15.586956       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:14:15.690694       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a2d02679a51ed33ad3086b27a58279d82b4d1c6bd035050764df771a3b17cf2c] <==
	I1026 15:14:15.208709       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 15:14:15.210859       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 15:14:15.213378       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 15:14:15.218724       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 15:14:15.218863       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 15:14:15.218975       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-790012"
	I1026 15:14:15.219046       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 15:14:15.220067       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 15:14:15.220140       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 15:14:15.220204       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 15:14:15.220276       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 15:14:15.220283       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 15:14:15.224415       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 15:14:15.226997       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 15:14:15.229653       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 15:14:15.231807       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 15:14:15.234528       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 15:14:15.234556       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 15:14:15.234626       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 15:14:15.234649       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 15:14:15.234652       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 15:14:15.234639       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:14:15.234679       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:14:15.234687       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 15:14:15.261752       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [340c4006e10f18fc87ad00cf77d818fadf1aab8a4c9b92d33498730d7f4e711d] <==
	I1026 15:14:12.375660       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:14:12.453803       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:14:12.554825       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:14:12.554884       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 15:14:12.555065       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:14:12.584326       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:14:12.584382       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:14:12.590845       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:14:12.591413       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:14:12.591803       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:14:12.594081       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:14:12.594151       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:14:12.594303       1 config.go:200] "Starting service config controller"
	I1026 15:14:12.594756       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:14:12.594357       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:14:12.594852       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:14:12.594527       1 config.go:309] "Starting node config controller"
	I1026 15:14:12.594911       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:14:12.594936       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:14:12.694581       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:14:12.695736       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 15:14:12.695802       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [35d0d03944a78ecf21c8c3291224fdd9f405cd21a6e29cd4d3096bc1744575bb] <==
	I1026 15:14:10.609715       1 serving.go:386] Generated self-signed cert in-memory
	W1026 15:14:11.594943       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 15:14:11.594997       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 15:14:11.595010       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 15:14:11.595019       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 15:14:11.672137       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 15:14:11.672893       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:14:11.684217       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:14:11.684402       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:14:11.687605       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:14:11.687708       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 15:14:11.785959       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:14:15 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:15.941241     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bdfb4cb1-9363-4e8a-8424-ffd6e9068e49-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-kfgm2\" (UID: \"bdfb4cb1-9363-4e8a-8424-ffd6e9068e49\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2"
	Oct 26 15:14:15 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:15.941279     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3c881e80-fe95-4d71-aff2-be956290436b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-pj966\" (UID: \"3c881e80-fe95-4d71-aff2-be956290436b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pj966"
	Oct 26 15:14:19 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:19.866810     721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 26 15:14:19 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:19.896474     721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pj966" podStartSLOduration=1.311487641 podStartE2EDuration="4.896443975s" podCreationTimestamp="2025-10-26 15:14:15 +0000 UTC" firstStartedPulling="2025-10-26 15:14:16.14984168 +0000 UTC m=+7.460697321" lastFinishedPulling="2025-10-26 15:14:19.734798017 +0000 UTC m=+11.045653655" observedRunningTime="2025-10-26 15:14:19.881423471 +0000 UTC m=+11.192279118" watchObservedRunningTime="2025-10-26 15:14:19.896443975 +0000 UTC m=+11.207299624"
	Oct 26 15:14:22 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:22.878149     721 scope.go:117] "RemoveContainer" containerID="753d12c04fbccbccdd889b52912cb9703b66ac2088032f0111ccb5b54e922476"
	Oct 26 15:14:23 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:23.882879     721 scope.go:117] "RemoveContainer" containerID="753d12c04fbccbccdd889b52912cb9703b66ac2088032f0111ccb5b54e922476"
	Oct 26 15:14:23 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:23.883064     721 scope.go:117] "RemoveContainer" containerID="cdbfd20ef6c053a16e047726ea87d829905474a03ff3d7deea21f260a640d390"
	Oct 26 15:14:23 default-k8s-diff-port-790012 kubelet[721]: E1026 15:14:23.883300     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kfgm2_kubernetes-dashboard(bdfb4cb1-9363-4e8a-8424-ffd6e9068e49)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2" podUID="bdfb4cb1-9363-4e8a-8424-ffd6e9068e49"
	Oct 26 15:14:24 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:24.888549     721 scope.go:117] "RemoveContainer" containerID="cdbfd20ef6c053a16e047726ea87d829905474a03ff3d7deea21f260a640d390"
	Oct 26 15:14:24 default-k8s-diff-port-790012 kubelet[721]: E1026 15:14:24.888841     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kfgm2_kubernetes-dashboard(bdfb4cb1-9363-4e8a-8424-ffd6e9068e49)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2" podUID="bdfb4cb1-9363-4e8a-8424-ffd6e9068e49"
	Oct 26 15:14:25 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:25.890752     721 scope.go:117] "RemoveContainer" containerID="cdbfd20ef6c053a16e047726ea87d829905474a03ff3d7deea21f260a640d390"
	Oct 26 15:14:25 default-k8s-diff-port-790012 kubelet[721]: E1026 15:14:25.890964     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kfgm2_kubernetes-dashboard(bdfb4cb1-9363-4e8a-8424-ffd6e9068e49)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2" podUID="bdfb4cb1-9363-4e8a-8424-ffd6e9068e49"
	Oct 26 15:14:39 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:39.798129     721 scope.go:117] "RemoveContainer" containerID="cdbfd20ef6c053a16e047726ea87d829905474a03ff3d7deea21f260a640d390"
	Oct 26 15:14:39 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:39.926089     721 scope.go:117] "RemoveContainer" containerID="cdbfd20ef6c053a16e047726ea87d829905474a03ff3d7deea21f260a640d390"
	Oct 26 15:14:39 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:39.926350     721 scope.go:117] "RemoveContainer" containerID="5b66042dae93c2be0b0c8e834cd5991a9a551117f3afe526e424fb409f564737"
	Oct 26 15:14:39 default-k8s-diff-port-790012 kubelet[721]: E1026 15:14:39.926558     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kfgm2_kubernetes-dashboard(bdfb4cb1-9363-4e8a-8424-ffd6e9068e49)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2" podUID="bdfb4cb1-9363-4e8a-8424-ffd6e9068e49"
	Oct 26 15:14:42 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:42.938497     721 scope.go:117] "RemoveContainer" containerID="86dd13cec7ebd6e740152fe44eb9f68d18517a514d6d9e9b154243c9372b9e3e"
	Oct 26 15:14:43 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:43.971489     721 scope.go:117] "RemoveContainer" containerID="5b66042dae93c2be0b0c8e834cd5991a9a551117f3afe526e424fb409f564737"
	Oct 26 15:14:43 default-k8s-diff-port-790012 kubelet[721]: E1026 15:14:43.971704     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kfgm2_kubernetes-dashboard(bdfb4cb1-9363-4e8a-8424-ffd6e9068e49)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2" podUID="bdfb4cb1-9363-4e8a-8424-ffd6e9068e49"
	Oct 26 15:14:58 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:58.799820     721 scope.go:117] "RemoveContainer" containerID="5b66042dae93c2be0b0c8e834cd5991a9a551117f3afe526e424fb409f564737"
	Oct 26 15:14:58 default-k8s-diff-port-790012 kubelet[721]: E1026 15:14:58.800047     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kfgm2_kubernetes-dashboard(bdfb4cb1-9363-4e8a-8424-ffd6e9068e49)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2" podUID="bdfb4cb1-9363-4e8a-8424-ffd6e9068e49"
	Oct 26 15:15:04 default-k8s-diff-port-790012 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:15:04 default-k8s-diff-port-790012 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:15:04 default-k8s-diff-port-790012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 26 15:15:04 default-k8s-diff-port-790012 systemd[1]: kubelet.service: Consumed 1.922s CPU time.
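dashboard-metrics-scraper is in CrashLoopBackOff, and the back-off doubling is visible above (10s, then 20s). The kubelet itself stops cleanly at 15:15:04, which is the expected effect of the pause flow rather than a failure. To see why the scraper container keeps exiting, the previous invocation's logs are the first place to look (pod name taken from this report):

  kubectl --context default-k8s-diff-port-790012 -n kubernetes-dashboard \
    logs dashboard-metrics-scraper-6ffb444bf9-kfgm2 --previous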
	
	
	==> kubernetes-dashboard [f7bce916e5757f41f13bbf128728404ae709bb2ac55795cf3f137d9120b46fdf] <==
	2025/10/26 15:14:19 Starting overwatch
	2025/10/26 15:14:19 Using namespace: kubernetes-dashboard
	2025/10/26 15:14:19 Using in-cluster config to connect to apiserver
	2025/10/26 15:14:19 Using secret token for csrf signing
	2025/10/26 15:14:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 15:14:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 15:14:19 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 15:14:19 Generating JWE encryption key
	2025/10/26 15:14:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 15:14:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 15:14:20 Initializing JWE encryption key from synchronized object
	2025/10/26 15:14:20 Creating in-cluster Sidecar client
	2025/10/26 15:14:20 Serving insecurely on HTTP port: 9090
	2025/10/26 15:14:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:14:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
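The dashboard itself starts and serves on port 9090; only its metric client fails, which lines up with the dashboard-metrics-scraper pod crash-looping in the kubelet log above. A quick cross-check:

  kubectl --context default-k8s-diff-port-790012 -n kubernetes-dashboard get pods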
	
	
	==> storage-provisioner [86dd13cec7ebd6e740152fe44eb9f68d18517a514d6d9e9b154243c9372b9e3e] <==
	I1026 15:14:12.321631       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 15:14:42.324724       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
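10.96.0.1:443 is the in-cluster VIP of the default kubernetes Service. An i/o timeout on it from a pod started immediately after the node restart usually means the service-proxy rules had not yet been programmed when the client's 30-second timeout expired; the replacement container below initializes successfully. One way to sanity-check the VIP:

  kubectl --context default-k8s-diff-port-790012 get svc kubernetes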
	
	
	==> storage-provisioner [a7481686d0bee57dc98cb02e1b36088f9ec209bfc7d121c41d4c799bc73c0ba1] <==
	I1026 15:14:43.005203       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 15:14:43.037228       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 15:14:43.037278       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 15:14:43.040295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:46.496313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:50.757358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:54.356488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:57.410719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:15:00.433664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:15:00.439226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:15:00.439420       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:15:00.439634       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-790012_99c196a5-4ea9-4bdb-b35a-422f67aaad19!
	I1026 15:15:00.439882       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f0ccb008-188e-4240-a93f-ef906d571508", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-790012_99c196a5-4ea9-4bdb-b35a-422f67aaad19 became leader
	W1026 15:15:00.442472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:15:00.446893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:15:00.540588       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-790012_99c196a5-4ea9-4bdb-b35a-422f67aaad19!
	W1026 15:15:02.450363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:15:02.456440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:15:04.459774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:15:04.537214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:15:06.541728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:15:06.546879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
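This provisioner still takes its leader-election lease on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, named in the event above), hence the steady deprecation warning on every renewal; it is noise rather than an error. The lease object can be inspected directly:

  kubectl --context default-k8s-diff-port-790012 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml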
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-790012 -n default-k8s-diff-port-790012
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-790012 -n default-k8s-diff-port-790012: exit status 2 (510.455634ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
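minikube status encodes overall cluster health in its exit code independently of the single field selected by --format, so printing "Running" while exiting non-zero is expected when some other component is unhealthy; the harness itself flags this as "may be ok". If a full breakdown is needed, a hedged variant (flag from the minikube CLI; output shape may vary by version) is:

  out/minikube-linux-amd64 status -p default-k8s-diff-port-790012 --output json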
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-790012 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-790012
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-790012:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f2c26d088cf784b9fa3246255055619f610c4cc9d4a3450f83c3d6e8e7c2648a",
	        "Created": "2025-10-26T15:12:52.819696195Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1123518,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T15:14:01.79961134Z",
	            "FinishedAt": "2025-10-26T15:14:00.285285428Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/f2c26d088cf784b9fa3246255055619f610c4cc9d4a3450f83c3d6e8e7c2648a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f2c26d088cf784b9fa3246255055619f610c4cc9d4a3450f83c3d6e8e7c2648a/hostname",
	        "HostsPath": "/var/lib/docker/containers/f2c26d088cf784b9fa3246255055619f610c4cc9d4a3450f83c3d6e8e7c2648a/hosts",
	        "LogPath": "/var/lib/docker/containers/f2c26d088cf784b9fa3246255055619f610c4cc9d4a3450f83c3d6e8e7c2648a/f2c26d088cf784b9fa3246255055619f610c4cc9d4a3450f83c3d6e8e7c2648a-json.log",
	        "Name": "/default-k8s-diff-port-790012",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-790012:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-790012",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f2c26d088cf784b9fa3246255055619f610c4cc9d4a3450f83c3d6e8e7c2648a",
	                "LowerDir": "/var/lib/docker/overlay2/cb1f825d0d0a1ba72d95cb70e9ee9f8fe5570837cf0ab7bbcdefcc67f9bd4518-init/diff:/var/lib/docker/overlay2/44fbf47b0380d8e5536fd686eddc180ae93370ed793e3b28b30bd2701cd014ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb1f825d0d0a1ba72d95cb70e9ee9f8fe5570837cf0ab7bbcdefcc67f9bd4518/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb1f825d0d0a1ba72d95cb70e9ee9f8fe5570837cf0ab7bbcdefcc67f9bd4518/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb1f825d0d0a1ba72d95cb70e9ee9f8fe5570837cf0ab7bbcdefcc67f9bd4518/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-790012",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-790012/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-790012",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-790012",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-790012",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7641d74e084bc2cb5e05c645147115df4e3228d6a080ebff9eccae99b1456abf",
	            "SandboxKey": "/var/run/docker/netns/7641d74e084b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33877"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33878"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33881"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33879"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33880"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-790012": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:03:3e:9a:18:08",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eb8db690bfd734c5a8c0b627f3759fdde408bba40a95fd914967f52dd3a0e0bf",
	                    "EndpointID": "0e43cd60c16fd7e3c37003f3ad9137d27bb9c1ede1cfa30f4f7e90f7462303a4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-790012",
	                        "f2c26d088cf7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
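The inspect output shows the container still running with the profile's five published ports bound to loopback; 8444/tcp, the non-default API server port this profile exercises, is reachable at 127.0.0.1:33880. A single mapping can be extracted with a Go template:

  docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-790012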
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-790012 -n default-k8s-diff-port-790012
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-790012 -n default-k8s-diff-port-790012: exit status 2 (482.21964ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-790012 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-790012 logs -n 25: (1.358876877s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                       │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-498531 sudo ip a s                                                   │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo ip r s                                                   │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo iptables-save                                            │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo iptables -t nat -L -n -v                                 │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo systemctl status kubelet --all --full --no-pager         │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo systemctl cat kubelet --no-pager                         │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ image   │ default-k8s-diff-port-790012 image list --format=json                           │ default-k8s-diff-port-790012 │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo journalctl -xeu kubelet --all --full --no-pager          │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ pause   │ -p default-k8s-diff-port-790012 --alsologtostderr -v=1                          │ default-k8s-diff-port-790012 │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │                     │
	│ ssh     │ -p kindnet-498531 sudo cat /etc/kubernetes/kubelet.conf                         │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo cat /var/lib/kubelet/config.yaml                         │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo systemctl status docker --all --full --no-pager          │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │                     │
	│ ssh     │ -p kindnet-498531 sudo systemctl cat docker --no-pager                          │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo cat /etc/docker/daemon.json                              │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │                     │
	│ ssh     │ -p kindnet-498531 sudo docker system info                                       │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │                     │
	│ ssh     │ -p kindnet-498531 sudo systemctl status cri-docker --all --full --no-pager      │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │                     │
	│ ssh     │ -p kindnet-498531 sudo systemctl cat cri-docker --no-pager                      │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │                     │
	│ ssh     │ -p kindnet-498531 sudo cat /usr/lib/systemd/system/cri-docker.service           │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo cri-dockerd --version                                    │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo systemctl status containerd --all --full --no-pager      │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │                     │
	│ ssh     │ -p kindnet-498531 sudo systemctl cat containerd --no-pager                      │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo cat /lib/systemd/system/containerd.service               │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo cat /etc/containerd/config.toml                          │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │ 26 Oct 25 15:15 UTC │
	│ ssh     │ -p kindnet-498531 sudo containerd config dump                                   │ kindnet-498531               │ jenkins │ v1.37.0 │ 26 Oct 25 15:15 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:14:46
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:14:46.418049 1136694 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:14:46.418363 1136694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:14:46.418372 1136694 out.go:374] Setting ErrFile to fd 2...
	I1026 15:14:46.418376 1136694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:14:46.418596 1136694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:14:46.419084 1136694 out.go:368] Setting JSON to false
	I1026 15:14:46.420367 1136694 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10634,"bootTime":1761481052,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:14:46.420482 1136694 start.go:141] virtualization: kvm guest
	I1026 15:14:46.422589 1136694 out.go:179] * [custom-flannel-498531] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:14:46.423907 1136694 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:14:46.423917 1136694 notify.go:220] Checking for updates...
	I1026 15:14:46.426265 1136694 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:14:46.427557 1136694 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:14:46.428739 1136694 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 15:14:46.430067 1136694 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:14:46.431375 1136694 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:14:46.433224 1136694 config.go:182] Loaded profile config "calico-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:46.433327 1136694 config.go:182] Loaded profile config "default-k8s-diff-port-790012": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:46.433394 1136694 config.go:182] Loaded profile config "kindnet-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:46.433489 1136694 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:14:46.458059 1136694 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 15:14:46.458190 1136694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:14:46.521520 1136694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 15:14:46.508722717 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
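
The docker info dump above comes from the plain CLI probe in the preceding Run line; minikube parses the JSON for fields such as the overlay2 storage driver and CgroupDriver before committing to the docker driver. A minimal way to reproduce the same check by hand (illustrative, not part of the test run):

    docker system info --format "{{json .}}" | head -c 300   # same JSON dump minikube parses
    docker info --format '{{.CgroupDriver}}'                 # prints "systemd" on this host
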
	I1026 15:14:46.521645 1136694 docker.go:318] overlay module found
	I1026 15:14:46.523535 1136694 out.go:179] * Using the docker driver based on user configuration
	W1026 15:14:41.584618 1123102 pod_ready.go:104] pod "coredns-66bc5c9577-shw6l" is not "Ready", error: <nil>
	W1026 15:14:43.586949 1123102 pod_ready.go:104] pod "coredns-66bc5c9577-shw6l" is not "Ready", error: <nil>
	W1026 15:14:46.084436 1123102 pod_ready.go:104] pod "coredns-66bc5c9577-shw6l" is not "Ready", error: <nil>
	I1026 15:14:46.524856 1136694 start.go:305] selected driver: docker
	I1026 15:14:46.524873 1136694 start.go:925] validating driver "docker" against <nil>
	I1026 15:14:46.524885 1136694 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:14:46.525533 1136694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:14:46.583777 1136694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 15:14:46.572424831 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:14:46.583986 1136694 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:14:46.584343 1136694 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:14:46.586150 1136694 out.go:179] * Using Docker driver with root privileges
	I1026 15:14:46.587281 1136694 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1026 15:14:46.587312 1136694 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1026 15:14:46.587397 1136694 start.go:349] cluster config:
	{Name:custom-flannel-498531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:14:46.588841 1136694 out.go:179] * Starting "custom-flannel-498531" primary control-plane node in "custom-flannel-498531" cluster
	I1026 15:14:46.590004 1136694 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 15:14:46.591088 1136694 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 15:14:46.592108 1136694 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:14:46.592144 1136694 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 15:14:46.592159 1136694 cache.go:58] Caching tarball of preloaded images
	I1026 15:14:46.592223 1136694 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 15:14:46.592281 1136694 preload.go:233] Found /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 15:14:46.592294 1136694 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:14:46.592410 1136694 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/config.json ...
	I1026 15:14:46.592432 1136694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/config.json: {Name:mk1e6ba6860d3905e9a58ab77af75d89def4aa4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:46.614428 1136694 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 15:14:46.614450 1136694 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 15:14:46.614466 1136694 cache.go:232] Successfully downloaded all kic artifacts
	I1026 15:14:46.614496 1136694 start.go:360] acquireMachinesLock for custom-flannel-498531: {Name:mk935e6b1579707a1059f6202bda836a982e421d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:14:46.614588 1136694 start.go:364] duration metric: took 74.859µs to acquireMachinesLock for "custom-flannel-498531"
	I1026 15:14:46.614617 1136694 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-498531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:14:46.614682 1136694 start.go:125] createHost starting for "" (driver="docker")
	I1026 15:14:42.464392 1131084 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 15:14:42.464418 1131084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I1026 15:14:42.480668 1131084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 15:14:43.681069 1131084 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.200351626s)
	I1026 15:14:43.681124 1131084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:14:43.681324 1131084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:43.681650 1131084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-498531 minikube.k8s.io/updated_at=2025_10_26T15_14_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=calico-498531 minikube.k8s.io/primary=true
	I1026 15:14:43.789179 1131084 ops.go:34] apiserver oom_adj: -16
	I1026 15:14:43.789455 1131084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:44.290293 1131084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:44.789371 1131084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:45.290119 1131084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:45.789668 1131084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:46.289380 1131084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:46.789908 1131084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:47.289702 1131084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:14:47.377479 1131084 kubeadm.go:1113] duration metric: took 3.696216416s to wait for elevateKubeSystemPrivileges
	I1026 15:14:47.377525 1131084 kubeadm.go:402] duration metric: took 14.298325959s to StartCluster
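
The repeated "kubectl get sa default" runs above are a readiness poll: the default service account only exists once the controller-manager's service-account controller has started, so minikube retries on a short interval until the get succeeds (the elevateKubeSystemPrivileges step timed above). A rough standalone equivalent, assuming kubectl already points at the new cluster:

    # Poll until the default ServiceAccount exists (it appears once
    # controller-manager is active), then proceed.
    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done
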
	I1026 15:14:47.377546 1131084 settings.go:142] acquiring lock: {Name:mkab79daecf1fab35293493e1e2484069a81f3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:47.377626 1131084 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:14:47.379385 1131084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/kubeconfig: {Name:mkd2ffb9d038711ee964ad156ae5b46dacacd9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:47.379696 1131084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:14:47.379686 1131084 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:14:47.379797 1131084 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:14:47.379904 1131084 config.go:182] Loaded profile config "calico-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:47.379922 1131084 addons.go:69] Setting storage-provisioner=true in profile "calico-498531"
	I1026 15:14:47.379957 1131084 addons.go:238] Setting addon storage-provisioner=true in "calico-498531"
	I1026 15:14:47.379951 1131084 addons.go:69] Setting default-storageclass=true in profile "calico-498531"
	I1026 15:14:47.379997 1131084 host.go:66] Checking if "calico-498531" exists ...
	I1026 15:14:47.380000 1131084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-498531"
	I1026 15:14:47.380612 1131084 cli_runner.go:164] Run: docker container inspect calico-498531 --format={{.State.Status}}
	I1026 15:14:47.380709 1131084 cli_runner.go:164] Run: docker container inspect calico-498531 --format={{.State.Status}}
	I1026 15:14:47.381601 1131084 out.go:179] * Verifying Kubernetes components...
	I1026 15:14:47.383150 1131084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:14:47.411145 1131084 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:14:47.411280 1131084 addons.go:238] Setting addon default-storageclass=true in "calico-498531"
	I1026 15:14:47.411347 1131084 host.go:66] Checking if "calico-498531" exists ...
	I1026 15:14:47.411874 1131084 cli_runner.go:164] Run: docker container inspect calico-498531 --format={{.State.Status}}
	I1026 15:14:47.414618 1131084 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:14:47.414642 1131084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:14:47.414728 1131084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-498531
	I1026 15:14:47.447334 1131084 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:14:47.447363 1131084 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:14:47.447441 1131084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-498531
	I1026 15:14:47.449391 1131084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33882 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/calico-498531/id_rsa Username:docker}
	I1026 15:14:47.476468 1131084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33882 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/calico-498531/id_rsa Username:docker}
	I1026 15:14:47.531624 1131084 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 15:14:47.551697 1131084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:14:47.615023 1131084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:14:47.631258 1131084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:14:47.768598 1131084 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
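
The long sed pipeline above rewrites the coredns ConfigMap in place: it splices a hosts plugin block in front of the existing "forward . /etc/resolv.conf" directive, inserts the log plugin before errors, and pushes the result back with "kubectl replace -f -". The injected stanza, exactly as the sed expressions build it, is:

    hosts {
       192.168.94.1 host.minikube.internal
       fallthrough
    }

so pods resolve host.minikube.internal to the docker network gateway while every other name falls through to the normal forwarders.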
	I1026 15:14:47.770046 1131084 node_ready.go:35] waiting up to 15m0s for node "calico-498531" to be "Ready" ...
	I1026 15:14:47.980329 1131084 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1026 15:14:46.616581 1136694 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 15:14:46.616854 1136694 start.go:159] libmachine.API.Create for "custom-flannel-498531" (driver="docker")
	I1026 15:14:46.616883 1136694 client.go:168] LocalClient.Create starting
	I1026 15:14:46.616939 1136694 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem
	I1026 15:14:46.616973 1136694 main.go:141] libmachine: Decoding PEM data...
	I1026 15:14:46.616988 1136694 main.go:141] libmachine: Parsing certificate...
	I1026 15:14:46.617046 1136694 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem
	I1026 15:14:46.617066 1136694 main.go:141] libmachine: Decoding PEM data...
	I1026 15:14:46.617075 1136694 main.go:141] libmachine: Parsing certificate...
	I1026 15:14:46.617422 1136694 cli_runner.go:164] Run: docker network inspect custom-flannel-498531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 15:14:46.635057 1136694 cli_runner.go:211] docker network inspect custom-flannel-498531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 15:14:46.635136 1136694 network_create.go:284] running [docker network inspect custom-flannel-498531] to gather additional debugging logs...
	I1026 15:14:46.635157 1136694 cli_runner.go:164] Run: docker network inspect custom-flannel-498531
	W1026 15:14:46.653131 1136694 cli_runner.go:211] docker network inspect custom-flannel-498531 returned with exit code 1
	I1026 15:14:46.653182 1136694 network_create.go:287] error running [docker network inspect custom-flannel-498531]: docker network inspect custom-flannel-498531: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-498531 not found
	I1026 15:14:46.653202 1136694 network_create.go:289] output of [docker network inspect custom-flannel-498531]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-498531 not found
	
	** /stderr **
	I1026 15:14:46.653378 1136694 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:14:46.671316 1136694 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fa58be42f477 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:e4:ad:45:54:67} reservation:<nil>}
	I1026 15:14:46.672082 1136694 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-788b1aa150f9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d6:3d:9b:f7:9b:2d} reservation:<nil>}
	I1026 15:14:46.672838 1136694 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3ea0f8afe5af IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:d6:81:f4:17:77:eb} reservation:<nil>}
	I1026 15:14:46.673678 1136694 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001debc50}
	I1026 15:14:46.673705 1136694 network_create.go:124] attempt to create docker network custom-flannel-498531 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1026 15:14:46.673770 1136694 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-498531 custom-flannel-498531
	I1026 15:14:46.733380 1136694 network_create.go:108] docker network custom-flannel-498531 192.168.76.0/24 created
	I1026 15:14:46.733412 1136694 kic.go:121] calculated static IP "192.168.76.2" for the "custom-flannel-498531" container
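
The scan above walks minikube's preferred private /24s in order (192.168.49.0, 192.168.58.0, 192.168.67.0, ...), skipping any subnet an existing bridge already occupies, and settles on 192.168.76.0/24 with the gateway at .1 and the node pinned to .2. The result can be confirmed with a standard inspect (illustrative):

    docker network inspect custom-flannel-498531 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # expect: 192.168.76.0/24 192.168.76.1
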
	I1026 15:14:46.733481 1136694 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 15:14:46.752603 1136694 cli_runner.go:164] Run: docker volume create custom-flannel-498531 --label name.minikube.sigs.k8s.io=custom-flannel-498531 --label created_by.minikube.sigs.k8s.io=true
	I1026 15:14:46.771084 1136694 oci.go:103] Successfully created a docker volume custom-flannel-498531
	I1026 15:14:46.771229 1136694 cli_runner.go:164] Run: docker run --rm --name custom-flannel-498531-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-498531 --entrypoint /usr/bin/test -v custom-flannel-498531:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 15:14:47.187227 1136694 oci.go:107] Successfully prepared a docker volume custom-flannel-498531
	I1026 15:14:47.187269 1136694 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:14:47.187289 1136694 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 15:14:47.187349 1136694 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-498531:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1026 15:14:48.084996 1123102 pod_ready.go:104] pod "coredns-66bc5c9577-shw6l" is not "Ready", error: <nil>
	I1026 15:14:50.084591 1123102 pod_ready.go:94] pod "coredns-66bc5c9577-shw6l" is "Ready"
	I1026 15:14:50.084625 1123102 pod_ready.go:86] duration metric: took 37.006190261s for pod "coredns-66bc5c9577-shw6l" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:50.087960 1123102 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:50.093157 1123102 pod_ready.go:94] pod "etcd-default-k8s-diff-port-790012" is "Ready"
	I1026 15:14:50.093223 1123102 pod_ready.go:86] duration metric: took 5.237ms for pod "etcd-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:50.095812 1123102 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:50.100646 1123102 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-790012" is "Ready"
	I1026 15:14:50.100678 1123102 pod_ready.go:86] duration metric: took 4.841033ms for pod "kube-apiserver-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:50.103009 1123102 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:50.281974 1123102 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-790012" is "Ready"
	I1026 15:14:50.282007 1123102 pod_ready.go:86] duration metric: took 178.973035ms for pod "kube-controller-manager-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:50.482366 1123102 pod_ready.go:83] waiting for pod "kube-proxy-wk2nn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:50.881696 1123102 pod_ready.go:94] pod "kube-proxy-wk2nn" is "Ready"
	I1026 15:14:50.881732 1123102 pod_ready.go:86] duration metric: took 399.339489ms for pod "kube-proxy-wk2nn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:51.082131 1123102 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:51.481537 1123102 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-790012" is "Ready"
	I1026 15:14:51.481566 1123102 pod_ready.go:86] duration metric: took 399.410322ms for pod "kube-scheduler-default-k8s-diff-port-790012" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:14:51.481578 1123102 pod_ready.go:40] duration metric: took 38.407876759s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:14:51.527437 1123102 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:14:51.603546 1123102 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-790012" cluster and "default" namespace by default
	I1026 15:14:47.981580 1131084 addons.go:514] duration metric: took 601.789415ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 15:14:48.273280 1131084 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-498531" context rescaled to 1 replicas
	W1026 15:14:49.774364 1131084 node_ready.go:57] node "calico-498531" has "Ready":"False" status (will retry)
	W1026 15:14:51.929371 1131084 node_ready.go:57] node "calico-498531" has "Ready":"False" status (will retry)
	I1026 15:14:52.757629 1136694 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-498531:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.570222652s)
	I1026 15:14:52.757662 1136694 kic.go:203] duration metric: took 5.570370343s to extract preloaded images to volume ...
	W1026 15:14:52.757796 1136694 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 15:14:52.757832 1136694 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 15:14:52.757878 1136694 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 15:14:52.822780 1136694 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-498531 --name custom-flannel-498531 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-498531 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-498531 --network custom-flannel-498531 --ip 192.168.76.2 --volume custom-flannel-498531:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
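
That single docker run is the entire "node": a privileged container with tmpfs on /tmp and /run, the host's kernel modules mounted read-only, the preloaded custom-flannel-498531 volume on /var, a fixed IP on the new bridge network, and ports 22, 2376, 5000, 8443 and 32443 each published to an ephemeral port on 127.0.0.1. The ephemeral SSH port can be read back the same way the later log lines do (illustrative):

    docker container inspect custom-flannel-498531 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
    # prints the host port mapped to 22/tcp, e.g. 33887
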
	I1026 15:14:53.324119 1136694 cli_runner.go:164] Run: docker container inspect custom-flannel-498531 --format={{.State.Running}}
	I1026 15:14:53.347758 1136694 cli_runner.go:164] Run: docker container inspect custom-flannel-498531 --format={{.State.Status}}
	I1026 15:14:53.370970 1136694 cli_runner.go:164] Run: docker exec custom-flannel-498531 stat /var/lib/dpkg/alternatives/iptables
	I1026 15:14:53.421677 1136694 oci.go:144] the created container "custom-flannel-498531" has a running status.
	I1026 15:14:53.421723 1136694 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/custom-flannel-498531/id_rsa...
	I1026 15:14:53.562117 1136694 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-841519/.minikube/machines/custom-flannel-498531/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 15:14:53.594855 1136694 cli_runner.go:164] Run: docker container inspect custom-flannel-498531 --format={{.State.Status}}
	I1026 15:14:53.614861 1136694 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 15:14:53.614884 1136694 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-498531 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 15:14:53.671462 1136694 cli_runner.go:164] Run: docker container inspect custom-flannel-498531 --format={{.State.Status}}
	I1026 15:14:53.697370 1136694 machine.go:93] provisionDockerMachine start ...
	I1026 15:14:53.697495 1136694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-498531
	I1026 15:14:53.725922 1136694 main.go:141] libmachine: Using SSH client type: native
	I1026 15:14:53.726305 1136694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33887 <nil> <nil>}
	I1026 15:14:53.726330 1136694 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:14:53.884266 1136694 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-498531
	
	I1026 15:14:53.884302 1136694 ubuntu.go:182] provisioning hostname "custom-flannel-498531"
	I1026 15:14:53.884377 1136694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-498531
	I1026 15:14:53.906925 1136694 main.go:141] libmachine: Using SSH client type: native
	I1026 15:14:53.907359 1136694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33887 <nil> <nil>}
	I1026 15:14:53.907389 1136694 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-498531 && echo "custom-flannel-498531" | sudo tee /etc/hostname
	I1026 15:14:54.071145 1136694 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-498531
	
	I1026 15:14:54.071242 1136694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-498531
	I1026 15:14:54.095316 1136694 main.go:141] libmachine: Using SSH client type: native
	I1026 15:14:54.095631 1136694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33887 <nil> <nil>}
	I1026 15:14:54.095672 1136694 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-498531' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-498531/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-498531' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:14:54.244138 1136694 main.go:141] libmachine: SSH cmd err, output: <nil>: 
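
The shell fragment above is idempotent hostname pinning: if no /etc/hosts entry already ends with the new hostname, it rewrites an existing 127.0.1.1 line in place, or appends one, so tools that reverse-resolve the hostname (sudo among them) never stall on lookups. A quick check of the outcome (illustrative):

    grep '^127.0.1.1' /etc/hosts   # expect: 127.0.1.1 custom-flannel-498531
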
	I1026 15:14:54.244183 1136694 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-841519/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-841519/.minikube}
	I1026 15:14:54.244212 1136694 ubuntu.go:190] setting up certificates
	I1026 15:14:54.244224 1136694 provision.go:84] configureAuth start
	I1026 15:14:54.244278 1136694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-498531
	I1026 15:14:54.262927 1136694 provision.go:143] copyHostCerts
	I1026 15:14:54.263006 1136694 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem, removing ...
	I1026 15:14:54.263033 1136694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem
	I1026 15:14:54.263114 1136694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/ca.pem (1082 bytes)
	I1026 15:14:54.263283 1136694 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem, removing ...
	I1026 15:14:54.263301 1136694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem
	I1026 15:14:54.263349 1136694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/cert.pem (1123 bytes)
	I1026 15:14:54.263620 1136694 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem, removing ...
	I1026 15:14:54.263644 1136694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem
	I1026 15:14:54.263692 1136694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-841519/.minikube/key.pem (1675 bytes)
	I1026 15:14:54.263830 1136694 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-498531 san=[127.0.0.1 192.168.76.2 custom-flannel-498531 localhost minikube]
	I1026 15:14:54.382550 1136694 provision.go:177] copyRemoteCerts
	I1026 15:14:54.382622 1136694 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:14:54.382661 1136694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-498531
	I1026 15:14:54.405322 1136694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33887 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/custom-flannel-498531/id_rsa Username:docker}
	I1026 15:14:54.515483 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:14:54.538864 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:14:54.560963 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1026 15:14:54.585769 1136694 provision.go:87] duration metric: took 341.526474ms to configureAuth
	I1026 15:14:54.585821 1136694 ubuntu.go:206] setting minikube options for container-runtime
	I1026 15:14:54.586033 1136694 config.go:182] Loaded profile config "custom-flannel-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:14:54.586209 1136694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-498531
	I1026 15:14:54.609410 1136694 main.go:141] libmachine: Using SSH client type: native
	I1026 15:14:54.609666 1136694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 127.0.0.1 33887 <nil> <nil>}
	I1026 15:14:54.609692 1136694 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:14:54.921424 1136694 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:14:54.921459 1136694 machine.go:96] duration metric: took 1.224062657s to provisionDockerMachine
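
The sysconfig write a few lines above drops an environment file that a systemd drop-in for crio.service consumes, passing "--insecure-registry 10.96.0.0/12" so registries on the service CIDR can be reached over plain HTTP, and the trailing systemctl restart applies it. The file itself is easy to verify inside the node (illustrative; the drop-in wiring ships in the kicbase image):

    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
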
	I1026 15:14:54.921471 1136694 client.go:171] duration metric: took 8.304579996s to LocalClient.Create
	I1026 15:14:54.921492 1136694 start.go:167] duration metric: took 8.30463816s to libmachine.API.Create "custom-flannel-498531"
	I1026 15:14:54.921505 1136694 start.go:293] postStartSetup for "custom-flannel-498531" (driver="docker")
	I1026 15:14:54.921519 1136694 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:14:54.921577 1136694 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:14:54.921613 1136694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-498531
	I1026 15:14:54.945141 1136694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33887 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/custom-flannel-498531/id_rsa Username:docker}
	I1026 15:14:55.058731 1136694 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:14:55.063418 1136694 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 15:14:55.063450 1136694 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 15:14:55.063463 1136694 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/addons for local assets ...
	I1026 15:14:55.063525 1136694 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-841519/.minikube/files for local assets ...
	I1026 15:14:55.063658 1136694 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem -> 8450952.pem in /etc/ssl/certs
	I1026 15:14:55.063795 1136694 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:14:55.074456 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:14:55.101315 1136694 start.go:296] duration metric: took 179.790139ms for postStartSetup
	I1026 15:14:55.101749 1136694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-498531
	I1026 15:14:55.124625 1136694 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/config.json ...
	I1026 15:14:55.124957 1136694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:14:55.125006 1136694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-498531
	I1026 15:14:55.147347 1136694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33887 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/custom-flannel-498531/id_rsa Username:docker}
	I1026 15:14:55.254075 1136694 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 15:14:55.260969 1136694 start.go:128] duration metric: took 8.64626965s to createHost
	I1026 15:14:55.260999 1136694 start.go:83] releasing machines lock for "custom-flannel-498531", held for 8.64639809s
	I1026 15:14:55.261086 1136694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-498531
	I1026 15:14:55.286335 1136694 ssh_runner.go:195] Run: cat /version.json
	I1026 15:14:55.286396 1136694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-498531
	I1026 15:14:55.286542 1136694 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:14:55.286622 1136694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-498531
	I1026 15:14:55.310192 1136694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33887 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/custom-flannel-498531/id_rsa Username:docker}
	I1026 15:14:55.310484 1136694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33887 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/custom-flannel-498531/id_rsa Username:docker}
	I1026 15:14:55.497839 1136694 ssh_runner.go:195] Run: systemctl --version
	I1026 15:14:55.506975 1136694 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:14:55.553240 1136694 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:14:55.559856 1136694 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:14:55.559943 1136694 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:14:55.595684 1136694 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
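
The find/mv pass above neutralizes stock bridge CNI configs by renaming them with a .mk_disabled suffix, so CRI-O cannot fall back to a default bridge network instead of the kube-flannel manifest chosen for this profile; the log confirms 10-crio-bridge.conflist.disabled and 87-podman-bridge.conflist were caught. Listing the directory afterwards would show only the renamed variants (illustrative):

    ls /etc/cni/net.d/   # e.g. 87-podman-bridge.conflist.mk_disabled
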
	I1026 15:14:55.595710 1136694 start.go:495] detecting cgroup driver to use...
	I1026 15:14:55.595748 1136694 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 15:14:55.595850 1136694 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:14:55.618245 1136694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:14:55.635705 1136694 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:14:55.635790 1136694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:14:55.657434 1136694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:14:55.683572 1136694 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:14:55.809753 1136694 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:14:55.933053 1136694 docker.go:234] disabling docker service ...
	I1026 15:14:55.933129 1136694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:14:55.960382 1136694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:14:55.977851 1136694 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:14:56.095589 1136694 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:14:56.224050 1136694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:14:56.242422 1136694 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:14:56.262587 1136694 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:14:56.262651 1136694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:56.276361 1136694 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 15:14:56.276438 1136694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:56.288661 1136694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:56.301313 1136694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:56.313188 1136694 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:14:56.324668 1136694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:56.336380 1136694 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:56.355220 1136694 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:14:56.367422 1136694 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:14:56.378382 1136694 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:14:56.388680 1136694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
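
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf declaring the pause image, the systemd cgroup manager, conmon in the pod cgroup, and unprivileged binding to low ports. Roughly, the touched keys end up as below (a sketch of just those keys; the surrounding TOML sections come from the stock drop-in in the kicbase image):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
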
	I1026 15:14:53.774363 1131084 node_ready.go:49] node "calico-498531" is "Ready"
	I1026 15:14:53.774412 1131084 node_ready.go:38] duration metric: took 6.00432831s for node "calico-498531" to be "Ready" ...
	I1026 15:14:53.774432 1131084 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:14:53.774491 1131084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:14:53.790269 1131084 api_server.go:72] duration metric: took 6.410526773s to wait for apiserver process to appear ...
	I1026 15:14:53.790305 1131084 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:14:53.790332 1131084 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1026 15:14:53.795724 1131084 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1026 15:14:53.796848 1131084 api_server.go:141] control plane version: v1.34.1
	I1026 15:14:53.796876 1131084 api_server.go:131] duration metric: took 6.56385ms to wait for apiserver health ...
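
The healthz probe is an ordinary HTTPS GET against the apiserver; a 200 with body "ok" is what marks the control plane healthy here. By hand it looks like this (illustrative; -k skips verification against the minikube CA):

    curl -sk https://192.168.94.2:8443/healthz
    # ok
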
	I1026 15:14:53.796886 1131084 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:14:53.804881 1131084 system_pods.go:59] 9 kube-system pods found
	I1026 15:14:53.804918 1131084 system_pods.go:61] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:14:53.804929 1131084 system_pods.go:61] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:14:53.804937 1131084 system_pods.go:61] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:53.804943 1131084 system_pods.go:61] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:14:53.804947 1131084 system_pods.go:61] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:14:53.804952 1131084 system_pods.go:61] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:14:53.804957 1131084 system_pods.go:61] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:14:53.804960 1131084 system_pods.go:61] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:14:53.804964 1131084 system_pods.go:61] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:14:53.804971 1131084 system_pods.go:74] duration metric: took 8.078637ms to wait for pod list to return data ...
	I1026 15:14:53.804980 1131084 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:14:53.808249 1131084 default_sa.go:45] found service account: "default"
	I1026 15:14:53.808283 1131084 default_sa.go:55] duration metric: took 3.287999ms for default service account to be created ...
	I1026 15:14:53.808295 1131084 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:14:53.811731 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:14:53.811768 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:14:53.811778 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:14:53.811785 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:53.811791 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:14:53.811798 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:14:53.811804 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:14:53.811809 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:14:53.811814 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:14:53.811821 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:14:53.811855 1131084 retry.go:31] will retry after 197.708289ms: missing components: kube-dns
	I1026 15:14:54.015045 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:14:54.015103 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:14:54.015117 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:14:54.015127 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:54.015136 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:14:54.015144 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:14:54.015151 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:14:54.015157 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:14:54.015176 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:14:54.015183 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:14:54.015205 1131084 retry.go:31] will retry after 253.035559ms: missing components: kube-dns
	I1026 15:14:54.272631 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:14:54.272666 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:14:54.272680 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:14:54.272693 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:54.272706 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:14:54.272714 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:14:54.272720 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:14:54.272731 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:14:54.272737 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:14:54.272746 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:14:54.272764 1131084 retry.go:31] will retry after 350.288095ms: missing components: kube-dns
	I1026 15:14:54.627566 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:14:54.627606 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:14:54.627618 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:14:54.627627 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:54.627636 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:14:54.627643 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:14:54.627651 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:14:54.627660 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:14:54.627665 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:14:54.627672 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:14:54.627696 1131084 retry.go:31] will retry after 380.19977ms: missing components: kube-dns
	I1026 15:14:55.012277 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:14:55.012321 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:14:55.012337 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:14:55.012348 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:55.012356 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:14:55.012369 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:14:55.012375 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:14:55.012383 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:14:55.012388 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:14:55.012394 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Running
	I1026 15:14:55.012420 1131084 retry.go:31] will retry after 552.616674ms: missing components: kube-dns
	I1026 15:14:55.570586 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:14:55.570638 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:14:55.570657 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:14:55.570668 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:55.570677 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:14:55.570684 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:14:55.570694 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:14:55.570702 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:14:55.570726 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:14:55.570735 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Running
	I1026 15:14:55.570758 1131084 retry.go:31] will retry after 595.931881ms: missing components: kube-dns
	I1026 15:14:56.173558 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:14:56.173590 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:14:56.173602 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:14:56.173609 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:56.173615 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:14:56.173620 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:14:56.173626 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:14:56.173630 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:14:56.173633 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:14:56.173636 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Running
	I1026 15:14:56.173653 1131084 retry.go:31] will retry after 1.138105559s: missing components: kube-dns
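Each retry.go line above is one iteration of minikube's polling loop: list the kube-system pods, report which expected components are still missing (here kube-dns, i.e. the CoreDNS pod), and sleep a growing interval before the next check. A rough shell equivalent of the same wait, assuming kubectl is pointed at this cluster and the conventional k8s-app=kube-dns label:

	until kubectl -n kube-system get pods -l k8s-app=kube-dns \
	    -o jsonpath='{.items[*].status.phase}' | grep -q Running; do
	  sleep 2   # minikube instead increases the delay on every retry
	done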
	I1026 15:14:56.500281 1136694 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:14:56.976059 1136694 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:14:56.976149 1136694 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:14:56.982233 1136694 start.go:563] Will wait 60s for crictl version
	I1026 15:14:56.982305 1136694 ssh_runner.go:195] Run: which crictl
	I1026 15:14:56.987782 1136694 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 15:14:57.019400 1136694 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 15:14:57.019478 1136694 ssh_runner.go:195] Run: crio --version
	I1026 15:14:57.058761 1136694 ssh_runner.go:195] Run: crio --version
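The runtime probe above resolves crictl, queries it for the CRI version triple (RuntimeName/RuntimeVersion/RuntimeApiVersion), then asks the crio binary for its own version string. The same probes can be run by hand inside the node, e.g. via minikube ssh:

	which crictl
	sudo crictl version   # prints the Version/RuntimeName/RuntimeVersion block logged above
	crio --version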
	I1026 15:14:57.097571 1136694 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 15:14:57.098970 1136694 cli_runner.go:164] Run: docker network inspect custom-flannel-498531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 15:14:57.121285 1136694 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 15:14:57.126902 1136694 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
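The bash one-liner above is an idempotent /etc/hosts update: any stale host.minikube.internal entry is filtered out, the fresh mapping is appended, and the temp file is copied back with sudo (the grep on the preceding line checks whether the mapping is already present). Expanded for readability:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo "192.168.76.1	host.minikube.internal"
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts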
	I1026 15:14:57.141232 1136694 kubeadm.go:883] updating cluster {Name:custom-flannel-498531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:14:57.141418 1136694 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:14:57.141487 1136694 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:14:57.185708 1136694 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:14:57.185736 1136694 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:14:57.185792 1136694 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:14:57.219741 1136694 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:14:57.219764 1136694 cache_images.go:85] Images are preloaded, skipping loading
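The two crictl invocations above return the runtime's image list as JSON; minikube compares it against the preload manifest for v1.34.1/crio and skips the tarball extraction when everything is already present. A sketch of the same query, assuming jq is available on the node:

	sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort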
	I1026 15:14:57.219773 1136694 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1026 15:14:57.219928 1136694 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=custom-flannel-498531 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
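The unit fragment above is rendered into a systemd drop-in (written a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf); the empty ExecStart= line clears the packaged command before substituting minikube's own kubelet invocation. To inspect the merged result on the node:

	systemctl cat kubelet
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf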
	I1026 15:14:57.220020 1136694 ssh_runner.go:195] Run: crio config
	I1026 15:14:57.287386 1136694 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1026 15:14:57.287440 1136694 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:14:57.287473 1136694 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-498531 NodeName:custom-flannel-498531 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:14:57.287673 1136694 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-498531"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
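The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new below and later fed to kubeadm init. Recent kubeadm releases can lint such a file offline; a sketch using the same pinned binary path:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml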
	
	I1026 15:14:57.287761 1136694 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:14:57.299620 1136694 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:14:57.299700 1136694 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:14:57.309521 1136694 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1026 15:14:57.327482 1136694 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:14:57.349096 1136694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1026 15:14:57.367341 1136694 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 15:14:57.372149 1136694 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:14:57.385408 1136694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:14:57.506841 1136694 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:14:57.538470 1136694 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531 for IP: 192.168.76.2
	I1026 15:14:57.538499 1136694 certs.go:195] generating shared ca certs ...
	I1026 15:14:57.538521 1136694 certs.go:227] acquiring lock for ca certs: {Name:mkc310765b5f037cf348f6c57ba521193a825757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:57.538947 1136694 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key
	I1026 15:14:57.539018 1136694 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key
	I1026 15:14:57.539035 1136694 certs.go:257] generating profile certs ...
	I1026 15:14:57.539110 1136694 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/client.key
	I1026 15:14:57.539136 1136694 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/client.crt with IP's: []
	I1026 15:14:57.852342 1136694 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/client.crt ...
	I1026 15:14:57.852377 1136694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/client.crt: {Name:mkf162f0a6e6d1d4549566eb1d6d1dfa27c3abaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:57.852577 1136694 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/client.key ...
	I1026 15:14:57.852596 1136694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/client.key: {Name:mk06b39f95a3cbe2cc52c0719a837a9916d83c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:57.852711 1136694 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.key.805c0865
	I1026 15:14:57.852727 1136694 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.crt.805c0865 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1026 15:14:58.003410 1136694 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.crt.805c0865 ...
	I1026 15:14:58.003440 1136694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.crt.805c0865: {Name:mk4565bb8ce08333692dd2592db70c2917edc3ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:58.003664 1136694 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.key.805c0865 ...
	I1026 15:14:58.003690 1136694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.key.805c0865: {Name:mkf5f2c5af49d0f3a57185e5f5906604daa129b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:58.003829 1136694 certs.go:382] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.crt.805c0865 -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.crt
	I1026 15:14:58.003951 1136694 certs.go:386] copying /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.key.805c0865 -> /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.key
	I1026 15:14:58.004049 1136694 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/proxy-client.key
	I1026 15:14:58.004073 1136694 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/proxy-client.crt with IP's: []
	I1026 15:14:58.280120 1136694 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/proxy-client.crt ...
	I1026 15:14:58.280149 1136694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/proxy-client.crt: {Name:mk23a2d95c2d377b35426f1f4ffa697933107a56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:58.280334 1136694 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/proxy-client.key ...
	I1026 15:14:58.280358 1136694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/proxy-client.key: {Name:mk423f4c3c213d0107bdffa2b5ff01cb2c0371e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:14:58.280617 1136694 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem (1338 bytes)
	W1026 15:14:58.280656 1136694 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095_empty.pem, impossibly tiny 0 bytes
	I1026 15:14:58.280666 1136694 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 15:14:58.280692 1136694 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:14:58.280725 1136694 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:14:58.280758 1136694 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/certs/key.pem (1675 bytes)
	I1026 15:14:58.280820 1136694 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem (1708 bytes)
	I1026 15:14:58.281697 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:14:58.309083 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:14:58.335131 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:14:58.365291 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:14:58.392506 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1026 15:14:58.421409 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 15:14:58.445718 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:14:58.475411 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/custom-flannel-498531/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:14:58.501425 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/ssl/certs/8450952.pem --> /usr/share/ca-certificates/8450952.pem (1708 bytes)
	I1026 15:14:58.529743 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:14:58.553746 1136694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-841519/.minikube/certs/845095.pem --> /usr/share/ca-certificates/845095.pem (1338 bytes)
	I1026 15:14:58.578120 1136694 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:14:58.595811 1136694 ssh_runner.go:195] Run: openssl version
	I1026 15:14:58.604311 1136694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8450952.pem && ln -fs /usr/share/ca-certificates/8450952.pem /etc/ssl/certs/8450952.pem"
	I1026 15:14:58.616516 1136694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8450952.pem
	I1026 15:14:58.621628 1136694 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:26 /usr/share/ca-certificates/8450952.pem
	I1026 15:14:58.621697 1136694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8450952.pem
	I1026 15:14:58.678756 1136694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8450952.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:14:58.736817 1136694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:14:58.747916 1136694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:14:58.753440 1136694 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:14 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:14:58.753506 1136694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:14:58.809269 1136694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:14:58.822505 1136694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/845095.pem && ln -fs /usr/share/ca-certificates/845095.pem /etc/ssl/certs/845095.pem"
	I1026 15:14:58.835046 1136694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/845095.pem
	I1026 15:14:58.841222 1136694 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:26 /usr/share/ca-certificates/845095.pem
	I1026 15:14:58.841295 1136694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/845095.pem
	I1026 15:14:58.899735 1136694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/845095.pem /etc/ssl/certs/51391683.0"
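The test -L / ln -fs pairs above create the subject-hash symlinks OpenSSL uses to look up trusted CAs in /etc/ssl/certs; the hash in each link name is exactly what the preceding `openssl x509 -hash -noout` call printed. Reproduced by hand for the minikube CA:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941
	ls -l /etc/ssl/certs/b5213941.0   # -> /etc/ssl/certs/minikubeCA.pem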
	I1026 15:14:58.914533 1136694 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:14:58.920480 1136694 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 15:14:58.920536 1136694 kubeadm.go:400] StartCluster: {Name:custom-flannel-498531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-498531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:14:58.920632 1136694 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:14:58.920690 1136694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:14:58.955737 1136694 cri.go:89] found id: ""
	I1026 15:14:58.955832 1136694 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:14:58.964936 1136694 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:14:58.974974 1136694 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 15:14:58.975030 1136694 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:14:58.984671 1136694 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:14:58.984690 1136694 kubeadm.go:157] found existing configuration files:
	
	I1026 15:14:58.984738 1136694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:14:58.993947 1136694 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:14:58.994022 1136694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:14:59.001940 1136694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:14:59.010070 1136694 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:14:59.010118 1136694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:14:59.017831 1136694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:14:59.026258 1136694 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:14:59.026323 1136694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:14:59.034610 1136694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:14:59.043384 1136694 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:14:59.043450 1136694 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 15:14:59.051761 1136694 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
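The init above disables the listed preflight checks (minikube already manages ports, swap, memory, and system verification inside the kic container, so under the docker driver those checks would fail spuriously). The preflight phase alone can be replayed with the same kind of exclusions; a sketch:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=SystemVerification,Swap,Mem,NumCPU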
	I1026 15:14:59.097672 1136694 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 15:14:59.097748 1136694 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 15:14:59.122290 1136694 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 15:14:59.122396 1136694 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 15:14:59.122452 1136694 kubeadm.go:318] OS: Linux
	I1026 15:14:59.122551 1136694 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 15:14:59.122651 1136694 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 15:14:59.122726 1136694 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 15:14:59.122833 1136694 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 15:14:59.122901 1136694 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 15:14:59.122990 1136694 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 15:14:59.123081 1136694 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 15:14:59.123146 1136694 kubeadm.go:318] CGROUPS_IO: enabled
	I1026 15:14:59.188421 1136694 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 15:14:59.188558 1136694 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 15:14:59.188685 1136694 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 15:14:59.199526 1136694 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 15:14:59.288433 1136694 out.go:252]   - Generating certificates and keys ...
	I1026 15:14:59.288607 1136694 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:14:59.288709 1136694 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:14:59.579438 1136694 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:14:59.788772 1136694 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:15:00.087854 1136694 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:15:00.898819 1136694 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:15:01.342723 1136694 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:15:01.343706 1136694 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-498531 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 15:14:57.316754 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:14:57.316787 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:14:57.316796 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:14:57.316803 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:57.316808 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running
	I1026 15:14:57.316813 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:14:57.316817 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:14:57.316820 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:14:57.316823 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:14:57.316826 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Running
	I1026 15:14:57.316846 1131084 retry.go:31] will retry after 1.30566337s: missing components: kube-dns
	I1026 15:14:58.628420 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:14:58.628462 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:14:58.628475 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:14:58.628486 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:14:58.628491 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running
	I1026 15:14:58.628499 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:14:58.628504 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:14:58.628508 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:14:58.628511 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:14:58.628514 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Running
	I1026 15:14:58.628531 1131084 retry.go:31] will retry after 1.632507044s: missing components: kube-dns
	I1026 15:15:00.274975 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:15:00.275017 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:15:00.275033 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Pending / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:15:00.275046 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:15:00.275052 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running
	I1026 15:15:00.275059 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:15:00.275064 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:15:00.275072 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:15:00.275077 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:15:00.275081 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Running
	I1026 15:15:00.275101 1131084 retry.go:31] will retry after 1.463967354s: missing components: kube-dns
	I1026 15:15:01.744778 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:15:01.744819 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:15:01.744830 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:15:01.744841 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:15:01.744847 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running
	I1026 15:15:01.744854 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:15:01.744859 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:15:01.744866 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:15:01.744871 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:15:01.744878 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Running
	I1026 15:15:01.744925 1131084 retry.go:31] will retry after 2.74736786s: missing components: kube-dns
	I1026 15:15:01.590933 1136694 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:15:01.591114 1136694 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-498531 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 15:15:01.808275 1136694 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:15:02.520741 1136694 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:15:02.725101 1136694 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:15:02.725284 1136694 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:15:02.756033 1136694 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:15:03.315066 1136694 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 15:15:03.768933 1136694 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:15:04.198586 1136694 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:15:04.707055 1136694 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:15:04.707751 1136694 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:15:04.713567 1136694 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 15:15:04.717011 1136694 out.go:252]   - Booting up control plane ...
	I1026 15:15:04.717141 1136694 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:15:04.717253 1136694 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:15:04.717339 1136694 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:15:04.733293 1136694 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:15:04.733436 1136694 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:15:04.741630 1136694 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:15:04.742121 1136694 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:15:04.742204 1136694 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:15:04.856462 1136694 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:15:04.856604 1136694 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 15:15:06.358282 1136694 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501872091s
	I1026 15:15:06.362269 1136694 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 15:15:06.362384 1136694 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1026 15:15:06.363139 1136694 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 15:15:06.363282 1136694 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 15:15:04.497950 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:15:04.497990 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:15:04.498001 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:15:04.498012 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:15:04.498017 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running
	I1026 15:15:04.498023 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:15:04.498029 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:15:04.498040 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:15:04.498047 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:15:04.498052 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Running
	I1026 15:15:04.498073 1131084 retry.go:31] will retry after 2.268801194s: missing components: kube-dns
	I1026 15:15:06.774438 1131084 system_pods.go:86] 9 kube-system pods found
	I1026 15:15:06.774489 1131084 system_pods.go:89] "calico-kube-controllers-59556d9b4c-xthm4" [e78c7e62-57f7-4dc3-a179-1e780bcfa76a] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 15:15:06.774501 1131084 system_pods.go:89] "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 15:15:06.774509 1131084 system_pods.go:89] "coredns-66bc5c9577-nsh99" [f5c4a462-d258-4858-b9cd-d0321bc9a237] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:15:06.774513 1131084 system_pods.go:89] "etcd-calico-498531" [18e0c7eb-ae67-48ae-a277-f66a4d0a270a] Running
	I1026 15:15:06.774518 1131084 system_pods.go:89] "kube-apiserver-calico-498531" [b58ab758-120c-4a7a-8994-e91d0c8811f3] Running
	I1026 15:15:06.774523 1131084 system_pods.go:89] "kube-controller-manager-calico-498531" [838165c1-7cc8-4272-a697-021f2dd1e995] Running
	I1026 15:15:06.774536 1131084 system_pods.go:89] "kube-proxy-lj2pk" [1da2639f-45c7-4f0d-8afa-d6c5b4022c05] Running
	I1026 15:15:06.774543 1131084 system_pods.go:89] "kube-scheduler-calico-498531" [d0cb1e26-dbde-4325-818b-03d2e40ca925] Running
	I1026 15:15:06.774560 1131084 system_pods.go:89] "storage-provisioner" [09763bce-fecf-4de1-a049-535b8b8fe334] Running
	I1026 15:15:06.774591 1131084 retry.go:31] will retry after 3.616247382s: missing components: kube-dns
	
	
	==> CRI-O <==
	Oct 26 15:14:22 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:22.994391493Z" level=info msg="Started container" PID=1712 containerID=cdbfd20ef6c053a16e047726ea87d829905474a03ff3d7deea21f260a640d390 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2/dashboard-metrics-scraper id=81329831-40a1-441a-83d9-9415324a76e2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=34387a55654c14b7f464c1c7762f7b6d6d871d5cdc395dafc14b0ff7863efddc
	Oct 26 15:14:23 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:23.88467308Z" level=info msg="Removing container: 753d12c04fbccbccdd889b52912cb9703b66ac2088032f0111ccb5b54e922476" id=ec2888de-258a-4301-bdb1-72e50153d008 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:14:23 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:23.896597337Z" level=info msg="Removed container 753d12c04fbccbccdd889b52912cb9703b66ac2088032f0111ccb5b54e922476: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2/dashboard-metrics-scraper" id=ec2888de-258a-4301-bdb1-72e50153d008 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:14:39 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:39.798747691Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=134c4e67-c79f-42f3-a4e9-54259d910291 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:14:39 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:39.799950919Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9a80dc6b-3e48-4b0e-b71e-aff25d7a0c60 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:14:39 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:39.801333985Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2/dashboard-metrics-scraper" id=2b7444ac-1e6a-4577-9674-63d7ca18507b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:14:39 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:39.801504499Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:39 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:39.808895226Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:39 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:39.80944952Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:39 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:39.839404614Z" level=info msg="Created container 5b66042dae93c2be0b0c8e834cd5991a9a551117f3afe526e424fb409f564737: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2/dashboard-metrics-scraper" id=2b7444ac-1e6a-4577-9674-63d7ca18507b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:14:39 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:39.840107144Z" level=info msg="Starting container: 5b66042dae93c2be0b0c8e834cd5991a9a551117f3afe526e424fb409f564737" id=24b61505-5cc0-4da1-91a0-f6d356366d77 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:14:39 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:39.842144936Z" level=info msg="Started container" PID=1722 containerID=5b66042dae93c2be0b0c8e834cd5991a9a551117f3afe526e424fb409f564737 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2/dashboard-metrics-scraper id=24b61505-5cc0-4da1-91a0-f6d356366d77 name=/runtime.v1.RuntimeService/StartContainer sandboxID=34387a55654c14b7f464c1c7762f7b6d6d871d5cdc395dafc14b0ff7863efddc
	Oct 26 15:14:39 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:39.927533412Z" level=info msg="Removing container: cdbfd20ef6c053a16e047726ea87d829905474a03ff3d7deea21f260a640d390" id=576507a1-a62b-453d-9bbd-0525fd7678af name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:14:39 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:39.938305107Z" level=info msg="Removed container cdbfd20ef6c053a16e047726ea87d829905474a03ff3d7deea21f260a640d390: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2/dashboard-metrics-scraper" id=576507a1-a62b-453d-9bbd-0525fd7678af name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 15:14:42 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:42.93899039Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9f43c39b-21a8-4b36-bd4b-840425082972 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:14:42 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:42.940184582Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=732e950a-053a-492a-860f-dcb29824af4a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 15:14:42 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:42.941602049Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5fd3016f-dec9-4c07-8aad-ab005de4efab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:14:42 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:42.941721164Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:42 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:42.948979423Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:42 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:42.949231837Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/11946fff12ca4773a24858d9db1d5667cc25d017920fa754b18391febc453eb3/merged/etc/passwd: no such file or directory"
	Oct 26 15:14:42 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:42.949269909Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/11946fff12ca4773a24858d9db1d5667cc25d017920fa754b18391febc453eb3/merged/etc/group: no such file or directory"
	Oct 26 15:14:42 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:42.949606202Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 15:14:42 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:42.984373337Z" level=info msg="Created container a7481686d0bee57dc98cb02e1b36088f9ec209bfc7d121c41d4c799bc73c0ba1: kube-system/storage-provisioner/storage-provisioner" id=5fd3016f-dec9-4c07-8aad-ab005de4efab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 15:14:42 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:42.985633349Z" level=info msg="Starting container: a7481686d0bee57dc98cb02e1b36088f9ec209bfc7d121c41d4c799bc73c0ba1" id=fdf149f0-5a16-48b9-8a3b-d1cefa7fa19a name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 15:14:42 default-k8s-diff-port-790012 crio[564]: time="2025-10-26T15:14:42.989282695Z" level=info msg="Started container" PID=1736 containerID=a7481686d0bee57dc98cb02e1b36088f9ec209bfc7d121c41d4c799bc73c0ba1 description=kube-system/storage-provisioner/storage-provisioner id=fdf149f0-5a16-48b9-8a3b-d1cefa7fa19a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b90970ccbc9114d9579b29545c1c86e16c47264be856f1717ae35bcd218f12b8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	a7481686d0bee       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           27 seconds ago       Running             storage-provisioner         1                   b90970ccbc911       storage-provisioner                                    kube-system
	5b66042dae93c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago       Exited              dashboard-metrics-scraper   2                   34387a55654c1       dashboard-metrics-scraper-6ffb444bf9-kfgm2             kubernetes-dashboard
	f7bce916e5757       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   50 seconds ago       Running             kubernetes-dashboard        0                   68cdea23a0dde       kubernetes-dashboard-855c9754f9-pj966                  kubernetes-dashboard
	42b1e3115d00b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   fa7798d815ab4       busybox                                                default
	cdcd33a110ab7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           58 seconds ago       Running             coredns                     0                   f138c76ccf852       coredns-66bc5c9577-shw6l                               kube-system
	340c4006e10f1       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           58 seconds ago       Running             kube-proxy                  0                   a834e70122398       kube-proxy-wk2nn                                       kube-system
	86dd13cec7ebd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         0                   b90970ccbc911       storage-provisioner                                    kube-system
	cffe05dde621a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           58 seconds ago       Running             kindnet-cni                 0                   dacd3b2bdc152       kindnet-7ch5r                                          kube-system
	a2d02679a51ed       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   7e344794aed06       kube-controller-manager-default-k8s-diff-port-790012   kube-system
	facf1cc394076       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   1b3392b163689       kube-apiserver-default-k8s-diff-port-790012            kube-system
	35d0d03944a78       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   31ef44d12f792       kube-scheduler-default-k8s-diff-port-790012            kube-system
	8aa809c39193f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   5115d9ab3f888       etcd-default-k8s-diff-port-790012                      kube-system
	
	
	==> coredns [cdcd33a110ab72d97c137eac4a12dab06a6293ca167a79ea2a1ec28b0b18ccdc] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54479 - 8434 "HINFO IN 4125302407912520646.1330475005463334414. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.055089013s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-790012
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-790012
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=default-k8s-diff-port-790012
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_13_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:13:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-790012
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:15:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:14:42 +0000   Sun, 26 Oct 2025 15:13:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:14:42 +0000   Sun, 26 Oct 2025 15:13:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:14:42 +0000   Sun, 26 Oct 2025 15:13:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:14:42 +0000   Sun, 26 Oct 2025 15:13:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-790012
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                fc981cf4-4aaf-42bf-b320-22476764867d
	  Boot ID:                    e70b7d4e-400a-47f5-8079-e2e0047e8598
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-shw6l                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-default-k8s-diff-port-790012                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-7ch5r                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-default-k8s-diff-port-790012             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-790012    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-wk2nn                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-default-k8s-diff-port-790012             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-kfgm2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-pj966                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 109s                 kube-proxy       
	  Normal  Starting                 57s                  kube-proxy       
	  Normal  Starting                 2m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x8 over 2m4s)  kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s                 node-controller  Node default-k8s-diff-port-790012 event: Registered Node default-k8s-diff-port-790012 in Controller
	  Normal  NodeReady                99s                  kubelet          Node default-k8s-diff-port-790012 status is now: NodeReady
	  Normal  Starting                 62s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)    kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)    kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)    kubelet          Node default-k8s-diff-port-790012 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                  node-controller  Node default-k8s-diff-port-790012 event: Registered Node default-k8s-diff-port-790012 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a aa 88 29 0d b3 08 06
	[  +0.000423] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fe 35 ab d8 59 96 08 06
	[ +13.995664] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[Oct26 13:57] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.142653] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	[  +0.001867] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 11 1f 08 b1 22 08 06
	[  +1.203813] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 99 91 1d d2 2e 08 06
	[  +0.000377] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a2 9a ef 92 46 08 06
	[ +21.331967] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 9a 04 7c 08 66 08 06
	[  +0.000411] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae 3d 81 29 d1 8b 08 06
	[  +0.000592] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 6d bf f0 af 6b 08 06
	[Oct26 13:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 5b 4b 78 cc 44 08 06
	[  +0.000933] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 ac 40 84 1e 67 08 06
	
	
	==> etcd [8aa809c39193fbb83582e34b6983bd3f1e5fe7760c1faafff728462dd1913646] <==
	{"level":"warn","ts":"2025-10-26T15:14:10.671461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.680592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.692663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.702949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.711286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.720228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.728567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.738215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.756438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.768286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.784142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.794787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.804008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.815294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.827884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.836260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.843697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.851641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.861719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.883223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.892308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.908738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.916641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.923759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:14:10.977138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39902","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 15:15:10 up  2:57,  0 user,  load average: 4.79, 3.41, 2.16
	Linux default-k8s-diff-port-790012 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cffe05dde621ab9582c7dd3cc9f6894fcec1d0b54f1ed7baf19f6154e397b609] <==
	I1026 15:14:12.466898       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 15:14:12.467443       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 15:14:12.467729       1 main.go:148] setting mtu 1500 for CNI 
	I1026 15:14:12.467802       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 15:14:12.467835       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T15:14:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 15:14:12.762637       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 15:14:12.762699       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 15:14:12.762719       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 15:14:12.766415       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 15:14:13.164946       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 15:14:13.164976       1 metrics.go:72] Registering metrics
	I1026 15:14:13.165053       1 controller.go:711] "Syncing nftables rules"
	I1026 15:14:22.763464       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:14:22.763563       1 main.go:301] handling current node
	I1026 15:14:32.763452       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:14:32.763484       1 main.go:301] handling current node
	I1026 15:14:42.763283       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:14:42.763366       1 main.go:301] handling current node
	I1026 15:14:52.763102       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:14:52.763143       1 main.go:301] handling current node
	I1026 15:15:02.763427       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1026 15:15:02.763486       1 main.go:301] handling current node
	
	
	==> kube-apiserver [facf1cc394076aaa508c872a3c8c00a3efde72f036be55b7af624017d37ce6a3] <==
	I1026 15:14:11.664878       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 15:14:11.674280       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1026 15:14:11.675096       1 policy_source.go:240] refreshing policies
	E1026 15:14:11.690079       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 15:14:11.694380       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1026 15:14:11.695664       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 15:14:11.702049       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:14:11.713156       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 15:14:11.714696       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 15:14:11.713203       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 15:14:11.713177       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 15:14:11.722901       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:14:11.730246       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:14:11.733472       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 15:14:11.920484       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:14:12.178095       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 15:14:12.279999       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:14:12.320796       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:14:12.335264       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:14:12.440548       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.159.132"}
	I1026 15:14:12.459981       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.149.108"}
	I1026 15:14:12.597109       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:14:15.188975       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:14:15.586956       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:14:15.690694       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a2d02679a51ed33ad3086b27a58279d82b4d1c6bd035050764df771a3b17cf2c] <==
	I1026 15:14:15.208709       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 15:14:15.210859       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 15:14:15.213378       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 15:14:15.218724       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 15:14:15.218863       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 15:14:15.218975       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-790012"
	I1026 15:14:15.219046       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 15:14:15.220067       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 15:14:15.220140       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 15:14:15.220204       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 15:14:15.220276       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 15:14:15.220283       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 15:14:15.224415       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 15:14:15.226997       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 15:14:15.229653       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 15:14:15.231807       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 15:14:15.234528       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 15:14:15.234556       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 15:14:15.234626       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 15:14:15.234649       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 15:14:15.234652       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 15:14:15.234639       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 15:14:15.234679       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:14:15.234687       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 15:14:15.261752       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [340c4006e10f18fc87ad00cf77d818fadf1aab8a4c9b92d33498730d7f4e711d] <==
	I1026 15:14:12.375660       1 server_linux.go:53] "Using iptables proxy"
	I1026 15:14:12.453803       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:14:12.554825       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:14:12.554884       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 15:14:12.555065       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:14:12.584326       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 15:14:12.584382       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:14:12.590845       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:14:12.591413       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:14:12.591803       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:14:12.594081       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:14:12.594151       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:14:12.594303       1 config.go:200] "Starting service config controller"
	I1026 15:14:12.594756       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:14:12.594357       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:14:12.594852       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:14:12.594527       1 config.go:309] "Starting node config controller"
	I1026 15:14:12.594911       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:14:12.594936       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:14:12.694581       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:14:12.695736       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 15:14:12.695802       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [35d0d03944a78ecf21c8c3291224fdd9f405cd21a6e29cd4d3096bc1744575bb] <==
	I1026 15:14:10.609715       1 serving.go:386] Generated self-signed cert in-memory
	W1026 15:14:11.594943       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 15:14:11.594997       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 15:14:11.595010       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 15:14:11.595019       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 15:14:11.672137       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 15:14:11.672893       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:14:11.684217       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:14:11.684402       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:14:11.687605       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:14:11.687708       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 15:14:11.785959       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:14:15 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:15.941241     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bdfb4cb1-9363-4e8a-8424-ffd6e9068e49-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-kfgm2\" (UID: \"bdfb4cb1-9363-4e8a-8424-ffd6e9068e49\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2"
	Oct 26 15:14:15 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:15.941279     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3c881e80-fe95-4d71-aff2-be956290436b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-pj966\" (UID: \"3c881e80-fe95-4d71-aff2-be956290436b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pj966"
	Oct 26 15:14:19 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:19.866810     721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 26 15:14:19 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:19.896474     721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pj966" podStartSLOduration=1.311487641 podStartE2EDuration="4.896443975s" podCreationTimestamp="2025-10-26 15:14:15 +0000 UTC" firstStartedPulling="2025-10-26 15:14:16.14984168 +0000 UTC m=+7.460697321" lastFinishedPulling="2025-10-26 15:14:19.734798017 +0000 UTC m=+11.045653655" observedRunningTime="2025-10-26 15:14:19.881423471 +0000 UTC m=+11.192279118" watchObservedRunningTime="2025-10-26 15:14:19.896443975 +0000 UTC m=+11.207299624"
	Oct 26 15:14:22 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:22.878149     721 scope.go:117] "RemoveContainer" containerID="753d12c04fbccbccdd889b52912cb9703b66ac2088032f0111ccb5b54e922476"
	Oct 26 15:14:23 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:23.882879     721 scope.go:117] "RemoveContainer" containerID="753d12c04fbccbccdd889b52912cb9703b66ac2088032f0111ccb5b54e922476"
	Oct 26 15:14:23 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:23.883064     721 scope.go:117] "RemoveContainer" containerID="cdbfd20ef6c053a16e047726ea87d829905474a03ff3d7deea21f260a640d390"
	Oct 26 15:14:23 default-k8s-diff-port-790012 kubelet[721]: E1026 15:14:23.883300     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kfgm2_kubernetes-dashboard(bdfb4cb1-9363-4e8a-8424-ffd6e9068e49)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2" podUID="bdfb4cb1-9363-4e8a-8424-ffd6e9068e49"
	Oct 26 15:14:24 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:24.888549     721 scope.go:117] "RemoveContainer" containerID="cdbfd20ef6c053a16e047726ea87d829905474a03ff3d7deea21f260a640d390"
	Oct 26 15:14:24 default-k8s-diff-port-790012 kubelet[721]: E1026 15:14:24.888841     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kfgm2_kubernetes-dashboard(bdfb4cb1-9363-4e8a-8424-ffd6e9068e49)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2" podUID="bdfb4cb1-9363-4e8a-8424-ffd6e9068e49"
	Oct 26 15:14:25 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:25.890752     721 scope.go:117] "RemoveContainer" containerID="cdbfd20ef6c053a16e047726ea87d829905474a03ff3d7deea21f260a640d390"
	Oct 26 15:14:25 default-k8s-diff-port-790012 kubelet[721]: E1026 15:14:25.890964     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kfgm2_kubernetes-dashboard(bdfb4cb1-9363-4e8a-8424-ffd6e9068e49)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2" podUID="bdfb4cb1-9363-4e8a-8424-ffd6e9068e49"
	Oct 26 15:14:39 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:39.798129     721 scope.go:117] "RemoveContainer" containerID="cdbfd20ef6c053a16e047726ea87d829905474a03ff3d7deea21f260a640d390"
	Oct 26 15:14:39 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:39.926089     721 scope.go:117] "RemoveContainer" containerID="cdbfd20ef6c053a16e047726ea87d829905474a03ff3d7deea21f260a640d390"
	Oct 26 15:14:39 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:39.926350     721 scope.go:117] "RemoveContainer" containerID="5b66042dae93c2be0b0c8e834cd5991a9a551117f3afe526e424fb409f564737"
	Oct 26 15:14:39 default-k8s-diff-port-790012 kubelet[721]: E1026 15:14:39.926558     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kfgm2_kubernetes-dashboard(bdfb4cb1-9363-4e8a-8424-ffd6e9068e49)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2" podUID="bdfb4cb1-9363-4e8a-8424-ffd6e9068e49"
	Oct 26 15:14:42 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:42.938497     721 scope.go:117] "RemoveContainer" containerID="86dd13cec7ebd6e740152fe44eb9f68d18517a514d6d9e9b154243c9372b9e3e"
	Oct 26 15:14:43 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:43.971489     721 scope.go:117] "RemoveContainer" containerID="5b66042dae93c2be0b0c8e834cd5991a9a551117f3afe526e424fb409f564737"
	Oct 26 15:14:43 default-k8s-diff-port-790012 kubelet[721]: E1026 15:14:43.971704     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kfgm2_kubernetes-dashboard(bdfb4cb1-9363-4e8a-8424-ffd6e9068e49)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2" podUID="bdfb4cb1-9363-4e8a-8424-ffd6e9068e49"
	Oct 26 15:14:58 default-k8s-diff-port-790012 kubelet[721]: I1026 15:14:58.799820     721 scope.go:117] "RemoveContainer" containerID="5b66042dae93c2be0b0c8e834cd5991a9a551117f3afe526e424fb409f564737"
	Oct 26 15:14:58 default-k8s-diff-port-790012 kubelet[721]: E1026 15:14:58.800047     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kfgm2_kubernetes-dashboard(bdfb4cb1-9363-4e8a-8424-ffd6e9068e49)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kfgm2" podUID="bdfb4cb1-9363-4e8a-8424-ffd6e9068e49"
	Oct 26 15:15:04 default-k8s-diff-port-790012 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 15:15:04 default-k8s-diff-port-790012 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 15:15:04 default-k8s-diff-port-790012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 26 15:15:04 default-k8s-diff-port-790012 systemd[1]: kubelet.service: Consumed 1.922s CPU time.
	
	
	==> kubernetes-dashboard [f7bce916e5757f41f13bbf128728404ae709bb2ac55795cf3f137d9120b46fdf] <==
	2025/10/26 15:14:19 Using namespace: kubernetes-dashboard
	2025/10/26 15:14:19 Using in-cluster config to connect to apiserver
	2025/10/26 15:14:19 Using secret token for csrf signing
	2025/10/26 15:14:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 15:14:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 15:14:19 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 15:14:19 Generating JWE encryption key
	2025/10/26 15:14:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 15:14:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 15:14:20 Initializing JWE encryption key from synchronized object
	2025/10/26 15:14:20 Creating in-cluster Sidecar client
	2025/10/26 15:14:20 Serving insecurely on HTTP port: 9090
	2025/10/26 15:14:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:14:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 15:14:19 Starting overwatch
	
	
	==> storage-provisioner [86dd13cec7ebd6e740152fe44eb9f68d18517a514d6d9e9b154243c9372b9e3e] <==
	I1026 15:14:12.321631       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 15:14:42.324724       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a7481686d0bee57dc98cb02e1b36088f9ec209bfc7d121c41d4c799bc73c0ba1] <==
	I1026 15:14:43.037228       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 15:14:43.037278       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 15:14:43.040295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:46.496313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:50.757358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:54.356488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:14:57.410719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:15:00.433664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:15:00.439226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:15:00.439420       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 15:15:00.439634       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-790012_99c196a5-4ea9-4bdb-b35a-422f67aaad19!
	I1026 15:15:00.439882       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f0ccb008-188e-4240-a93f-ef906d571508", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-790012_99c196a5-4ea9-4bdb-b35a-422f67aaad19 became leader
	W1026 15:15:00.442472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:15:00.446893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 15:15:00.540588       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-790012_99c196a5-4ea9-4bdb-b35a-422f67aaad19!
	W1026 15:15:02.450363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:15:02.456440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:15:04.459774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:15:04.537214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:15:06.541728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:15:06.546879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:15:08.552287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:15:08.559321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:15:10.563828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:15:10.569313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-790012 -n default-k8s-diff-port-790012
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-790012 -n default-k8s-diff-port-790012: exit status 2 (380.845717ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-790012 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.83s)
E1026 15:16:13.232721  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
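The post-mortem above boils down to a templated `minikube status` probe whose exit code encodes cluster state, which is why the harness prints "status error: exit status 2 (may be ok)" even though stdout says "Running". A minimal Go sketch of the same probe, assuming only the flags shown in the helpers_test.go output above (the error handling is illustrative, not the harness's real code):

// postmortem_probe.go: a sketch of the status probe run after a failed Pause
// test. Profile name is taken from the failing test above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "default-k8s-diff-port-790012"

	// Same shape as the helpers_test.go:262 invocation: print only the
	// APIServer field of the status struct.
	cmd := exec.Command("minikube", "status",
		"--format={{.APIServer}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()

	// minikube encodes cluster state in the exit code (exit status 2 in the
	// run above), so a non-zero exit is reported but not treated as fatal.
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Printf("status error: exit status %d (may be ok)\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run minikube:", err)
		return
	}
	fmt.Printf("APIServer: %s", out)
}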

                                                
                                    

Test pass (260/326)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.18
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 3.82
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
20 TestDownloadOnlyKic 0.42
21 TestBinaryMirror 0.85
22 TestOffline 63.42
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 136.89
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 6.45
48 TestAddons/StoppedEnableDisable 18.54
49 TestCertOptions 26.28
50 TestCertExpiration 215.51
52 TestForceSystemdFlag 28.76
53 TestForceSystemdEnv 27.36
58 TestErrorSpam/setup 23.8
59 TestErrorSpam/start 0.7
60 TestErrorSpam/status 0.99
61 TestErrorSpam/pause 6
62 TestErrorSpam/unpause 6.04
63 TestErrorSpam/stop 8.13
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 38.27
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 14.84
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.06
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.82
75 TestFunctional/serial/CacheCmd/cache/add_local 1.2
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 67.97
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.29
86 TestFunctional/serial/LogsFileCmd 1.31
87 TestFunctional/serial/InvalidService 3.99
89 TestFunctional/parallel/ConfigCmd 0.57
91 TestFunctional/parallel/DryRun 0.39
92 TestFunctional/parallel/InternationalLanguage 0.17
93 TestFunctional/parallel/StatusCmd 0.98
98 TestFunctional/parallel/AddonsCmd 0.18
101 TestFunctional/parallel/SSHCmd 0.71
102 TestFunctional/parallel/CpCmd 1.85
104 TestFunctional/parallel/FileSync 0.3
105 TestFunctional/parallel/CertSync 1.74
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
113 TestFunctional/parallel/License 0.39
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.51
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.24
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
127 TestFunctional/parallel/ProfileCmd/profile_list 0.4
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
129 TestFunctional/parallel/MountCmd/any-port 61.84
130 TestFunctional/parallel/MountCmd/specific-port 1.79
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.78
132 TestFunctional/parallel/Version/short 0.07
133 TestFunctional/parallel/Version/components 0.5
134 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
135 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
136 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
137 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
138 TestFunctional/parallel/ImageCommands/ImageBuild 2.32
139 TestFunctional/parallel/ImageCommands/Setup 0.96
144 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
150 TestFunctional/parallel/ServiceCmd/List 1.72
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.7
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 106.09
163 TestMultiControlPlane/serial/DeployApp 3.86
164 TestMultiControlPlane/serial/PingHostFromPods 1.09
165 TestMultiControlPlane/serial/AddWorkerNode 25.15
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.93
168 TestMultiControlPlane/serial/CopyFile 17.99
169 TestMultiControlPlane/serial/StopSecondaryNode 19.14
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.74
171 TestMultiControlPlane/serial/RestartSecondaryNode 9.15
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.93
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 193.37
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.69
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.72
176 TestMultiControlPlane/serial/StopCluster 46.75
177 TestMultiControlPlane/serial/RestartCluster 54.96
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.72
179 TestMultiControlPlane/serial/AddSecondaryNode 48.08
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.93
184 TestJSONOutput/start/Command 38.78
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 6.07
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.23
209 TestKicCustomNetwork/create_custom_network 27.2
210 TestKicCustomNetwork/use_default_bridge_network 24
211 TestKicExistingNetwork 24.9
212 TestKicCustomSubnet 26.78
213 TestKicStaticIP 24.76
214 TestMainNoArgs 0.06
215 TestMinikubeProfile 50.24
218 TestMountStart/serial/StartWithMountFirst 8.82
219 TestMountStart/serial/VerifyMountFirst 0.29
220 TestMountStart/serial/StartWithMountSecond 8.08
221 TestMountStart/serial/VerifyMountSecond 0.28
222 TestMountStart/serial/DeleteFirst 1.72
223 TestMountStart/serial/VerifyMountPostDelete 0.28
224 TestMountStart/serial/Stop 1.27
225 TestMountStart/serial/RestartStopped 7.2
226 TestMountStart/serial/VerifyMountPostStop 0.28
229 TestMultiNode/serial/FreshStart2Nodes 90.84
230 TestMultiNode/serial/DeployApp2Nodes 3.29
231 TestMultiNode/serial/PingHostFrom2Pods 0.76
232 TestMultiNode/serial/AddNode 27.82
233 TestMultiNode/serial/MultiNodeLabels 0.07
234 TestMultiNode/serial/ProfileList 0.69
235 TestMultiNode/serial/CopyFile 10.22
236 TestMultiNode/serial/StopNode 2.31
237 TestMultiNode/serial/StartAfterStop 7.31
238 TestMultiNode/serial/RestartKeepsNodes 79.18
239 TestMultiNode/serial/DeleteNode 5.34
240 TestMultiNode/serial/StopMultiNode 30.42
241 TestMultiNode/serial/RestartMultiNode 29.82
242 TestMultiNode/serial/ValidateNameConflict 24.96
247 TestPreload 106.65
249 TestScheduledStopUnix 97.88
252 TestInsufficientStorage 9.86
253 TestRunningBinaryUpgrade 53.69
255 TestKubernetesUpgrade 312.86
256 TestMissingContainerUpgrade 93.16
258 TestPause/serial/Start 85.74
259 TestPause/serial/SecondStartNoReconfiguration 6.51
261 TestStoppedBinaryUpgrade/Setup 0.55
262 TestStoppedBinaryUpgrade/Upgrade 44.7
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
272 TestNoKubernetes/serial/StartWithK8s 24.63
280 TestNetworkPlugins/group/false 4.53
284 TestStoppedBinaryUpgrade/MinikubeLogs 1.11
285 TestNoKubernetes/serial/StartWithStopK8s 17.69
286 TestNoKubernetes/serial/Start 5.86
287 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
288 TestNoKubernetes/serial/ProfileList 14.62
289 TestNoKubernetes/serial/Stop 1.31
290 TestNoKubernetes/serial/StartNoArgs 6.57
291 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
293 TestStartStop/group/old-k8s-version/serial/FirstStart 51.7
295 TestStartStop/group/no-preload/serial/FirstStart 50.09
296 TestStartStop/group/old-k8s-version/serial/DeployApp 7.26
297 TestStartStop/group/no-preload/serial/DeployApp 7.23
299 TestStartStop/group/old-k8s-version/serial/Stop 16.16
301 TestStartStop/group/no-preload/serial/Stop 16.26
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
303 TestStartStop/group/old-k8s-version/serial/SecondStart 47.49
304 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
305 TestStartStop/group/no-preload/serial/SecondStart 49.74
307 TestStartStop/group/embed-certs/serial/FirstStart 40.33
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
311 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
313 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
314 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
317 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 47.14
319 TestStartStop/group/newest-cni/serial/FirstStart 28.66
320 TestStartStop/group/embed-certs/serial/DeployApp 8.26
321 TestNetworkPlugins/group/auto/Start 43.06
323 TestStartStop/group/embed-certs/serial/Stop 16.39
324 TestStartStop/group/newest-cni/serial/DeployApp 0
326 TestStartStop/group/newest-cni/serial/Stop 8
327 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
328 TestStartStop/group/embed-certs/serial/SecondStart 54.36
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
330 TestStartStop/group/newest-cni/serial/SecondStart 11.7
331 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.28
333 TestStartStop/group/default-k8s-diff-port/serial/Stop 17.18
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
338 TestNetworkPlugins/group/auto/KubeletFlags 0.39
339 TestNetworkPlugins/group/auto/NetCatPod 9.26
340 TestNetworkPlugins/group/kindnet/Start 40.25
341 TestNetworkPlugins/group/auto/DNS 0.11
342 TestNetworkPlugins/group/auto/Localhost 0.09
343 TestNetworkPlugins/group/auto/HairPin 0.09
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
345 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.74
346 TestNetworkPlugins/group/calico/Start 50.56
347 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
348 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
349 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
350 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
352 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
353 TestNetworkPlugins/group/kindnet/NetCatPod 9.32
354 TestNetworkPlugins/group/custom-flannel/Start 54.85
355 TestNetworkPlugins/group/kindnet/DNS 0.13
356 TestNetworkPlugins/group/kindnet/Localhost 0.1
357 TestNetworkPlugins/group/kindnet/HairPin 0.1
358 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
359 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
360 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
362 TestNetworkPlugins/group/calico/ControllerPod 6.01
363 TestNetworkPlugins/group/enable-default-cni/Start 70.39
364 TestNetworkPlugins/group/flannel/Start 53.16
365 TestNetworkPlugins/group/calico/KubeletFlags 0.43
366 TestNetworkPlugins/group/calico/NetCatPod 12.09
367 TestNetworkPlugins/group/calico/DNS 0.12
368 TestNetworkPlugins/group/calico/Localhost 0.1
369 TestNetworkPlugins/group/calico/HairPin 0.1
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.24
372 TestNetworkPlugins/group/custom-flannel/DNS 0.14
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
375 TestNetworkPlugins/group/bridge/Start 66.6
376 TestNetworkPlugins/group/flannel/ControllerPod 6
377 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
378 TestNetworkPlugins/group/flannel/NetCatPod 9.18
379 TestNetworkPlugins/group/flannel/DNS 0.11
380 TestNetworkPlugins/group/flannel/Localhost 0.09
381 TestNetworkPlugins/group/flannel/HairPin 0.09
382 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
383 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.18
384 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
385 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
386 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
387 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
388 TestNetworkPlugins/group/bridge/NetCatPod 9.18
389 TestNetworkPlugins/group/bridge/DNS 0.11
390 TestNetworkPlugins/group/bridge/Localhost 0.09
391 TestNetworkPlugins/group/bridge/HairPin 0.09
x
+
TestDownloadOnly/v1.28.0/json-events (4.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-313763 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-313763 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.183509916s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.18s)
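The json-events subtest consumes the stream of JSON event objects that `minikube start -o=json` writes to stdout. A minimal sketch that stream-decodes such output, assuming only that stdout carries a sequence of JSON objects (no particular event schema is assumed; the profile name is illustrative):

// decode_events.go: stream-decode JSON events from `minikube start -o=json`.
// A sketch, not the test's real parser.
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-o=json", "--download-only",
		"-p", "download-only-demo", "--kubernetes-version=v1.28.0")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// json.Decoder reads one object at a time from the concatenated stream.
	dec := json.NewDecoder(stdout)
	for {
		var event map[string]interface{}
		if err := dec.Decode(&event); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			break
		}
		fmt.Println("event:", event)
	}
	cmd.Wait()
}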

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1026 14:14:10.668230  845095 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1026 14:14:10.668330  845095 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
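As the preload.go lines above show, this subtest only checks that a cached tarball with the expected name exists under the cache directory. A minimal sketch of that existence check, deriving the filename from the pattern visible in the log (the `v18` preload schema version and the cache layout are taken from the output above; the cache root here is illustrative):

// preload_exists.go: check for a cached preload tarball, mirroring the
// filename pattern shown in the preload.go log lines above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func preloadPath(cacheRoot, k8sVersion, runtime, arch string) string {
	// e.g. preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-%s.tar.lz4",
		k8sVersion, runtime, arch)
	return filepath.Join(cacheRoot, "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.ExpandEnv("$HOME/.minikube/cache"), "v1.28.0", "cri-o", "amd64")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload:", p)
	} else {
		fmt.Println("no preload:", err)
	}
}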

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-313763
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-313763: exit status 85 (77.926783ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-313763 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-313763 │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 14:14:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 14:14:06.539456  845107 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:14:06.539585  845107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:14:06.539597  845107 out.go:374] Setting ErrFile to fd 2...
	I1026 14:14:06.539603  845107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:14:06.539847  845107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	W1026 14:14:06.539996  845107 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21664-841519/.minikube/config/config.json: open /home/jenkins/minikube-integration/21664-841519/.minikube/config/config.json: no such file or directory
	I1026 14:14:06.540535  845107 out.go:368] Setting JSON to true
	I1026 14:14:06.541525  845107 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6995,"bootTime":1761481052,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 14:14:06.541626  845107 start.go:141] virtualization: kvm guest
	I1026 14:14:06.543903  845107 out.go:99] [download-only-313763] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1026 14:14:06.544065  845107 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball: no such file or directory
	I1026 14:14:06.544097  845107 notify.go:220] Checking for updates...
	I1026 14:14:06.545366  845107 out.go:171] MINIKUBE_LOCATION=21664
	I1026 14:14:06.546705  845107 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:14:06.547886  845107 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 14:14:06.549019  845107 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 14:14:06.550193  845107 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1026 14:14:06.552615  845107 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1026 14:14:06.552930  845107 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:14:06.577719  845107 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 14:14:06.577804  845107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:14:06.638343  845107 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-26 14:14:06.62737803 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 14:14:06.638446  845107 docker.go:318] overlay module found
	I1026 14:14:06.640122  845107 out.go:99] Using the docker driver based on user configuration
	I1026 14:14:06.640158  845107 start.go:305] selected driver: docker
	I1026 14:14:06.640179  845107 start.go:925] validating driver "docker" against <nil>
	I1026 14:14:06.640282  845107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:14:06.693697  845107 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-26 14:14:06.684531138 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 14:14:06.693884  845107 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 14:14:06.694432  845107 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1026 14:14:06.694585  845107 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 14:14:06.696286  845107 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-313763 host does not exist
	  To start a cluster, run: "minikube start -p download-only-313763"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-313763
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (3.82s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-008452 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-008452 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.819249595s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.82s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1026 14:14:14.949831  845095 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1026 14:14:14.949880  845095 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-841519/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-008452
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-008452: exit status 85 (78.315769ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-313763 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-313763 │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ delete  │ -p download-only-313763                                                                                                                                                   │ download-only-313763 │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │ 26 Oct 25 14:14 UTC │
	│ start   │ -o=json --download-only -p download-only-008452 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-008452 │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 14:14:11
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 14:14:11.185690  845462 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:14:11.185945  845462 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:14:11.185953  845462 out.go:374] Setting ErrFile to fd 2...
	I1026 14:14:11.185957  845462 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:14:11.186175  845462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:14:11.186667  845462 out.go:368] Setting JSON to true
	I1026 14:14:11.187559  845462 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6999,"bootTime":1761481052,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 14:14:11.187655  845462 start.go:141] virtualization: kvm guest
	I1026 14:14:11.189525  845462 out.go:99] [download-only-008452] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 14:14:11.189687  845462 notify.go:220] Checking for updates...
	I1026 14:14:11.191134  845462 out.go:171] MINIKUBE_LOCATION=21664
	I1026 14:14:11.192442  845462 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:14:11.193714  845462 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 14:14:11.195022  845462 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 14:14:11.196357  845462 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1026 14:14:11.198750  845462 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1026 14:14:11.198973  845462 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:14:11.222099  845462 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 14:14:11.222185  845462 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:14:11.280152  845462 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-26 14:14:11.269982091 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 14:14:11.280329  845462 docker.go:318] overlay module found
	I1026 14:14:11.282101  845462 out.go:99] Using the docker driver based on user configuration
	I1026 14:14:11.282132  845462 start.go:305] selected driver: docker
	I1026 14:14:11.282138  845462 start.go:925] validating driver "docker" against <nil>
	I1026 14:14:11.282239  845462 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:14:11.337888  845462 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-26 14:14:11.327887282 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 14:14:11.338070  845462 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 14:14:11.338662  845462 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1026 14:14:11.338842  845462 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 14:14:11.340879  845462 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-008452 host does not exist
	  To start a cluster, run: "minikube start -p download-only-008452"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-008452
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.42s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-939440 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-939440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-939440
--- PASS: TestDownloadOnlyKic (0.42s)

                                                
                                    
x
+
TestBinaryMirror (0.85s)

                                                
                                                
=== RUN   TestBinaryMirror
I1026 14:14:16.140891  845095 binary.go:78] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-114305 --alsologtostderr --binary-mirror http://127.0.0.1:44689 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-114305" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-114305
--- PASS: TestBinaryMirror (0.85s)
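The binary.go line at the top of this test shows the URL convention the downloader uses: the kubectl URL carries a `?checksum=file:<url>.sha256` suffix pointing at the published checksum file. A minimal sketch that assembles such a URL (version, OS, and arch are parameters; the dl.k8s.io layout matches the logged URL exactly):

// binary_url.go: build a dl.k8s.io kubectl URL with a checksum=file: suffix,
// in the form logged by binary.go above.
package main

import "fmt"

func kubectlURL(version, goos, goarch string) string {
	base := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/%s/%s/kubectl",
		version, goos, goarch)
	// The checksum query parameter points at the published .sha256 file.
	return fmt.Sprintf("%s?checksum=file:%s.sha256", base, base)
}

func main() {
	fmt.Println(kubectlURL("v1.34.1", "linux", "amd64"))
	// => https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
}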

                                                
                                    
x
+
TestOffline (63.42s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-100892 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-100892 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m0.976091378s)
helpers_test.go:175: Cleaning up "offline-crio-100892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-100892
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-100892: (2.443435974s)
--- PASS: TestOffline (63.42s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-459729
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-459729: exit status 85 (67.568531ms)

                                                
                                                
-- stdout --
	* Profile "addons-459729" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-459729"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-459729
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-459729: exit status 85 (68.160889ms)

                                                
                                                
-- stdout --
	* Profile "addons-459729" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-459729"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (136.89s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-459729 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-459729 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m16.889464793s)
--- PASS: TestAddons/Setup (136.89s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-459729 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-459729 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (6.45s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-459729 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-459729 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [34ab5631-8a88-449f-95bb-06d39c99c9a5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [34ab5631-8a88-449f-95bb-06d39c99c9a5] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 6.003625212s
addons_test.go:694: (dbg) Run:  kubectl --context addons-459729 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-459729 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-459729 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (6.45s)
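The FakeCredentials test waits up to 8m0s for pods labeled `integration-test=busybox` to report Running before exec'ing `printenv` inside them. A minimal polling sketch of that wait, shelling out to kubectl (context name, label, and timeout come from the output above; the single-pod assumption and polling interval are illustrative):

// wait_for_pod.go: poll kubectl until the pod matching a label reports
// Running. A sketch of the wait loop, not the harness's actual helper.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(8 * time.Minute) // the test waits up to 8m0s
	for time.Now().Before(deadline) {
		// jsonpath prints the phase of each matching pod; this sketch
		// assumes exactly one pod matches the label.
		out, err := exec.Command("kubectl", "--context", "addons-459729",
			"get", "pods", "-l", "integration-test=busybox",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("pod is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod")
}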

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (18.54s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-459729
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-459729: (18.238165318s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-459729
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-459729
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-459729
--- PASS: TestAddons/StoppedEnableDisable (18.54s)

                                                
                                    
x
+
TestCertOptions (26.28s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-124833 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-124833 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (21.734467356s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-124833 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-124833 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-124833 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-124833" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-124833
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-124833: (3.827352219s)
--- PASS: TestCertOptions (26.28s)
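TestCertOptions inspects the generated apiserver certificate with `openssl x509 -text -noout` to confirm the SANs added by --apiserver-ips and --apiserver-names. The equivalent check in Go, parsing the PEM and asserting the extra SANs (the expected values come from the test invocation above; reading a local copy of the cert file is an assumption, since the real file lives inside the node):

// check_sans.go: parse an apiserver cert and check the SANs that
// --apiserver-ips/--apiserver-names should have added. A sketch of what the
// openssl invocation above is used to verify.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // e.g. copied out of /var/lib/minikube/certs/
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	wantIP := net.ParseIP("192.168.15.15")
	for _, ip := range cert.IPAddresses {
		if ip.Equal(wantIP) {
			fmt.Println("found SAN IP:", ip)
		}
	}
	for _, name := range cert.DNSNames {
		if name == "www.google.com" {
			fmt.Println("found SAN DNS name:", name)
		}
	}
	fmt.Println("expires:", cert.NotAfter) // the field TestCertExpiration below exercises
}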

                                                
                                    
x
+
TestCertExpiration (215.51s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-619245 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-619245 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (24.203962187s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-619245 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-619245 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (8.739357355s)
helpers_test.go:175: Cleaning up "cert-expiration-619245" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-619245
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-619245: (2.570965102s)
--- PASS: TestCertExpiration (215.51s)

TestForceSystemdFlag (28.76s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-391593 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-391593 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.014747154s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-391593 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-391593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-391593
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-391593: (2.446898859s)
--- PASS: TestForceSystemdFlag (28.76s)
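
Note: the `cat /etc/crio/crio.conf.d/02-crio.conf` step above is what verifies that --force-systemd reached the runtime. A more targeted hedged check, assuming CRI-O's standard TOML key for this setting:
	out/minikube-linux-amd64 -p force-systemd-flag-391593 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
	# expected (assumption): cgroup_manager = "systemd"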

TestForceSystemdEnv (27.36s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-305078 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-305078 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.855501003s)
helpers_test.go:175: Cleaning up "force-systemd-env-305078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-305078
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-305078: (2.50318597s)
--- PASS: TestForceSystemdEnv (27.36s)

TestErrorSpam/setup (23.80s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-463470 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-463470 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-463470 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-463470 --driver=docker  --container-runtime=crio: (23.80380448s)
--- PASS: TestErrorSpam/setup (23.80s)

TestErrorSpam/start (0.70s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 start --dry-run
--- PASS: TestErrorSpam/start (0.70s)

TestErrorSpam/status (0.99s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 status
--- PASS: TestErrorSpam/status (0.99s)

TestErrorSpam/pause (6.00s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 pause: exit status 80 (2.232372377s)
-- stdout --
	* Pausing node nospam-463470 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:25:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 pause: exit status 80 (2.027858677s)
-- stdout --
	* Pausing node nospam-463470 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:25:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 pause: exit status 80 (1.744006162s)
-- stdout --
	* Pausing node nospam-463470 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:25:51Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.00s)
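
Note: all three pause attempts fail identically: the stderr shows minikube shelling out to `sudo runc list -f json`, which dies because /run/runc does not exist on the node. The suite still records PASS because TestErrorSpam is checking the command's log output for spam rather than requiring pause to succeed. A hedged repro, assuming the nospam-463470 node is still running:
	out/minikube-linux-amd64 -p nospam-463470 ssh -- sudo ls /run/runc        # the state directory runc tries to open
	out/minikube-linux-amd64 -p nospam-463470 ssh -- sudo runc list -f json   # the exact call quoted in the stderr above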

TestErrorSpam/unpause (6.04s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 unpause: exit status 80 (2.072184057s)
-- stdout --
	* Unpausing node nospam-463470 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:25:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 unpause: exit status 80 (2.044759627s)
-- stdout --
	* Unpausing node nospam-463470 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:25:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 unpause: exit status 80 (1.923368432s)
-- stdout --
	* Unpausing node nospam-463470 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T14:25:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.04s)

TestErrorSpam/stop (8.13s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 stop: (7.919118777s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-463470 --log_dir /tmp/nospam-463470 stop
--- PASS: TestErrorSpam/stop (8.13s)

TestFunctional/serial/CopySyncFile (0.00s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21664-841519/.minikube/files/etc/test/nested/copy/845095/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (38.27s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-656017 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1026 14:26:34.528734  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:26:34.535210  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:26:34.546616  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:26:34.568027  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:26:34.609577  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:26:34.691296  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:26:34.852915  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:26:35.174699  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:26:35.816308  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:26:37.097907  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:26:39.660866  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:26:44.782395  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-656017 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (38.274310881s)
--- PASS: TestFunctional/serial/StartWithProxy (38.27s)
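
Note: the burst of E1026 cert_rotation lines above appears to be client-go's certificate watcher in the test process still pointing at the addons-459729 profile, which was deleted earlier in the run; it is background noise, not a failure of this test. A hedged confirmation that the profile is gone:
	out/minikube-linux-amd64 profile list
	ls /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt   # expected: No such file or directory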

TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (14.84s)

=== RUN   TestFunctional/serial/SoftStart
I1026 14:26:48.737147  845095 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-656017 --alsologtostderr -v=8
E1026 14:26:55.024684  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-656017 --alsologtostderr -v=8: (14.833892169s)
functional_test.go:678: soft start took 14.834762237s for "functional-656017" cluster.
I1026 14:27:03.571495  845095 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (14.84s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-656017 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.82s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.82s)

TestFunctional/serial/CacheCmd/cache/add_local (1.20s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-656017 /tmp/TestFunctionalserialCacheCmdcacheadd_local1245255966/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 cache add minikube-local-cache-test:functional-656017
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 cache delete minikube-local-cache-test:functional-656017
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-656017
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.20s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-656017 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (301.07992ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 kubectl -- --context functional-656017 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-656017 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (67.97s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-656017 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1026 14:27:15.506253  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:27:56.467613  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-656017 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m7.966177335s)
functional_test.go:776: restart took 1m7.966321545s for "functional-656017" cluster.
I1026 14:28:18.137691  845095 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (67.97s)
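
Note: the restart above carries --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision, and --wait=all only proves the cluster came back healthy. A hedged check that the flag actually landed on the apiserver, assuming the kubeadm-standard component label:
	kubectl --context functional-656017 -n kube-system get pods -l component=kube-apiserver -o jsonpath="{.items[0].spec.containers[0].command}" | tr ',' '\n' | grep admission
	# expected (assumption): --enable-admission-plugins=NamespaceAutoProvision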

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-656017 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.29s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-656017 logs: (1.293453693s)
--- PASS: TestFunctional/serial/LogsCmd (1.29s)

TestFunctional/serial/LogsFileCmd (1.31s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 logs --file /tmp/TestFunctionalserialLogsFileCmd3710731393/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-656017 logs --file /tmp/TestFunctionalserialLogsFileCmd3710731393/001/logs.txt: (1.308445827s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.31s)

TestFunctional/serial/InvalidService (3.99s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-656017 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-656017
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-656017: exit status 115 (358.399118ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31841 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-656017 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.99s)

TestFunctional/parallel/ConfigCmd (0.57s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-656017 config get cpus: exit status 14 (142.116637ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-656017 config get cpus: exit status 14 (95.114344ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.57s)

TestFunctional/parallel/DryRun (0.39s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-656017 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-656017 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (168.0577ms)
-- stdout --
	* [functional-656017] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1026 14:29:41.538281  884710 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:29:41.538521  884710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:29:41.538529  884710 out.go:374] Setting ErrFile to fd 2...
	I1026 14:29:41.538534  884710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:29:41.538764  884710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:29:41.539210  884710 out.go:368] Setting JSON to false
	I1026 14:29:41.540251  884710 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7930,"bootTime":1761481052,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 14:29:41.540342  884710 start.go:141] virtualization: kvm guest
	I1026 14:29:41.542133  884710 out.go:179] * [functional-656017] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 14:29:41.543985  884710 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 14:29:41.544003  884710 notify.go:220] Checking for updates...
	I1026 14:29:41.546302  884710 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:29:41.547403  884710 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 14:29:41.548421  884710 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 14:29:41.549703  884710 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 14:29:41.550891  884710 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 14:29:41.552575  884710 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:29:41.553115  884710 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:29:41.576814  884710 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 14:29:41.576908  884710 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:29:41.635496  884710 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-26 14:29:41.624418663 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 14:29:41.635608  884710 docker.go:318] overlay module found
	I1026 14:29:41.637296  884710 out.go:179] * Using the docker driver based on existing profile
	I1026 14:29:41.638564  884710 start.go:305] selected driver: docker
	I1026 14:29:41.638586  884710 start.go:925] validating driver "docker" against &{Name:functional-656017 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-656017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:29:41.638690  884710 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 14:29:41.640689  884710 out.go:203] 
	W1026 14:29:41.642112  884710 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1026 14:29:41.643457  884710 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-656017 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.39s)

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-656017 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-656017 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (173.082077ms)
-- stdout --
	* [functional-656017] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1026 14:29:41.369249  884626 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:29:41.369367  884626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:29:41.369375  884626 out.go:374] Setting ErrFile to fd 2...
	I1026 14:29:41.369381  884626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:29:41.369681  884626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:29:41.370149  884626 out.go:368] Setting JSON to false
	I1026 14:29:41.371209  884626 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7929,"bootTime":1761481052,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 14:29:41.371328  884626 start.go:141] virtualization: kvm guest
	I1026 14:29:41.373482  884626 out.go:179] * [functional-656017] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1026 14:29:41.375452  884626 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 14:29:41.375517  884626 notify.go:220] Checking for updates...
	I1026 14:29:41.377780  884626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:29:41.379039  884626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 14:29:41.380182  884626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 14:29:41.381415  884626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 14:29:41.382650  884626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 14:29:41.384247  884626 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:29:41.384734  884626 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:29:41.408221  884626 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 14:29:41.408331  884626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:29:41.466652  884626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-26 14:29:41.456313247 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 14:29:41.466768  884626 docker.go:318] overlay module found
	I1026 14:29:41.468427  884626 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1026 14:29:41.469729  884626 start.go:305] selected driver: docker
	I1026 14:29:41.469745  884626 start.go:925] validating driver "docker" against &{Name:functional-656017 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-656017 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:29:41.469847  884626 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 14:29:41.471667  884626 out.go:203] 
	W1026 14:29:41.473017  884626 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1026 14:29:41.474245  884626 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.98s)
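
A minimal Go sketch (not part of the test run; the binary path and profile name are assumptions taken from this log, and the template label "kublet" above is the test's own spelling, corrected to "kubelet" here) showing how these status checks can be driven programmatically:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-656017" // assumed profile name from this run

	// Custom format: minikube status accepts a Go template via -f.
	out, err := exec.Command("minikube", "-p", profile, "status",
		"-f", "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	if !strings.Contains(string(out), "host:") {
		log.Fatalf("unexpected status output: %s", out)
	}

	// JSON output: decode loosely rather than assuming the exact schema.
	out, err = exec.Command("minikube", "-p", profile, "status", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var st map[string]interface{}
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatal(err)
	}
	fmt.Println("host status:", st["Host"])
}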

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh -n functional-656017 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 cp functional-656017:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3939863884/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh -n functional-656017 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh -n functional-656017 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.85s)

                                                
                                    
TestFunctional/parallel/FileSync (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/845095/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh "sudo cat /etc/test/nested/copy/845095/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

                                                
                                    
TestFunctional/parallel/CertSync (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/845095.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh "sudo cat /etc/ssl/certs/845095.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/845095.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh "sudo cat /usr/share/ca-certificates/845095.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/8450952.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh "sudo cat /etc/ssl/certs/8450952.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/8450952.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh "sudo cat /usr/share/ca-certificates/8450952.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.74s)
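
The three paths above are the same certificate in its synced locations: the copy keyed by the test PID (845095.pem) under /etc/ssl/certs and /usr/share/ca-certificates, plus the hash-named entry (51391683.0) that OpenSSL-style lookups use. A minimal sketch (assuming minikube on PATH and this profile name) that re-runs the existence checks:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile := "functional-656017"
	paths := []string{
		"/etc/ssl/certs/845095.pem",
		"/usr/share/ca-certificates/845095.pem",
		"/etc/ssl/certs/51391683.0", // hash-named entry for the same cert
	}
	for _, p := range paths {
		// `sudo cat` mirrors the test; a non-zero exit means the cert is missing.
		if err := exec.Command("minikube", "-p", profile, "ssh", fmt.Sprintf("sudo cat %s", p)).Run(); err != nil {
			log.Fatalf("%s not synced: %v", p, err)
		}
		fmt.Println("ok:", p)
	}
}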

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-656017 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-656017 ssh "sudo systemctl is-active docker": exit status 1 (285.470856ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-656017 ssh "sudo systemctl is-active containerd": exit status 1 (283.053588ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
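
The two non-zero exits above are the passing outcome: on a crio cluster, docker and containerd must not be running, and `systemctl is-active` prints "inactive" and exits with status 3 (surfaced through minikube ssh as exit status 1) for a unit that is not active. A minimal sketch (profile name assumed from the log) of the same assertion:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-656017"
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("minikube", "-p", profile, "ssh",
			"sudo systemctl is-active "+unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		// Success here would be a failure: the unit must NOT be active.
		if err == nil || !strings.Contains(state, "inactive") {
			log.Fatalf("%s should be inactive, got %q (err=%v)", unit, state, err)
		}
		fmt.Printf("%s: inactive, as required\n", unit)
	}
}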

                                                
                                    
TestFunctional/parallel/License (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.39s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-656017 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-656017 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-656017 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 879771: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-656017 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-656017 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-656017 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [d5bfca14-bbe9-439e-b76e-568c60a28734] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [d5bfca14-bbe9-439e-b76e-568c60a28734] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003768912s
I1026 14:28:33.543956  845095 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.24s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-656017 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
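
The jsonpath query above only returns an address once `minikube tunnel` has programmed the LoadBalancer. A minimal polling sketch (context and service names taken from this run; the tunnel must already be running in another process):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "functional-656017",
			"get", "svc", "nginx-svc",
			"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
			fmt.Println("ingress IP:", ip) // e.g. 10.100.150.87 in the run below
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("no ingress IP assigned; is `minikube tunnel` running?")
}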

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.150.87 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-656017 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "333.046222ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "62.100833ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "340.94941ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "62.509617ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)
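
The --light variant returns in roughly 60ms against ~340ms for the full listing, consistent with it skipping the per-cluster status probe. A minimal consumer sketch (the JSON schema is deliberately not assumed; the output is decoded generically):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json", "--light").Output()
	if err != nil {
		log.Fatal(err)
	}
	var profiles map[string]interface{}
	if err := json.Unmarshal(out, &profiles); err != nil {
		log.Fatal(err)
	}
	// Print whatever top-level lists the tool returns, without relying on exact field names.
	for key, val := range profiles {
		if list, ok := val.([]interface{}); ok {
			fmt.Printf("%s: %d profile(s)\n", key, len(list))
		}
	}
}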

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (61.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-656017 /tmp/TestFunctionalparallelMountCmdany-port2342765656/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761488914927877726" to /tmp/TestFunctionalparallelMountCmdany-port2342765656/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761488914927877726" to /tmp/TestFunctionalparallelMountCmdany-port2342765656/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761488914927877726" to /tmp/TestFunctionalparallelMountCmdany-port2342765656/001/test-1761488914927877726
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-656017 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (281.043994ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1026 14:28:35.209241  845095 retry.go:31] will retry after 599.552169ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 26 14:28 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 26 14:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 26 14:28 test-1761488914927877726
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh cat /mount-9p/test-1761488914927877726
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-656017 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [41079232-dd53-46e6-b4c1-91c81f879e59] Pending
helpers_test.go:352: "busybox-mount" [41079232-dd53-46e6-b4c1-91c81f879e59] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E1026 14:29:18.389437  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox-mount" [41079232-dd53-46e6-b4c1-91c81f879e59] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [41079232-dd53-46e6-b4c1-91c81f879e59] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 59.003337185s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-656017 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-656017 /tmp/TestFunctionalparallelMountCmdany-port2342765656/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (61.84s)
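
The first findmnt probe above races the mount daemon coming up, which is why the harness retries after ~600ms. A minimal sketch (profile and mount point taken from this run) of the same backoff pattern:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	profile := "functional-656017"
	backoff := 500 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		// The same probe the test runs: is a 9p filesystem mounted at /mount-9p?
		if exec.Command("minikube", "-p", profile, "ssh",
			"findmnt -T /mount-9p | grep 9p").Run() == nil {
			fmt.Println("9p mount is up")
			return
		}
		log.Printf("attempt %d: mount not visible yet, retrying in %v", attempt, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	log.Fatal("mount never appeared")
}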

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-656017 /tmp/TestFunctionalparallelMountCmdspecific-port3903809018/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-656017 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (288.351385ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1026 14:29:37.052252  845095 retry.go:31] will retry after 446.739296ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-656017 /tmp/TestFunctionalparallelMountCmdspecific-port3903809018/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-656017 ssh "sudo umount -f /mount-9p": exit status 1 (280.14228ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-656017 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-656017 /tmp/TestFunctionalparallelMountCmdspecific-port3903809018/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-656017 /tmp/TestFunctionalparallelMountCmdVerifyCleanup688021677/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-656017 /tmp/TestFunctionalparallelMountCmdVerifyCleanup688021677/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-656017 /tmp/TestFunctionalparallelMountCmdVerifyCleanup688021677/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-656017 ssh "findmnt -T" /mount1: exit status 1 (355.761967ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1026 14:29:38.905739  845095 retry.go:31] will retry after 520.85908ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-656017 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-656017 /tmp/TestFunctionalparallelMountCmdVerifyCleanup688021677/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-656017 /tmp/TestFunctionalparallelMountCmdVerifyCleanup688021677/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-656017 /tmp/TestFunctionalparallelMountCmdVerifyCleanup688021677/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-656017 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-656017 image ls --format short --alsologtostderr:
I1026 14:34:44.810519  889866 out.go:360] Setting OutFile to fd 1 ...
I1026 14:34:44.810791  889866 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:34:44.810801  889866 out.go:374] Setting ErrFile to fd 2...
I1026 14:34:44.810806  889866 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:34:44.811022  889866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
I1026 14:34:44.811638  889866 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:34:44.811737  889866 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:34:44.812123  889866 cli_runner.go:164] Run: docker container inspect functional-656017 --format={{.State.Status}}
I1026 14:34:44.831141  889866 ssh_runner.go:195] Run: systemctl --version
I1026 14:34:44.831213  889866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-656017
I1026 14:34:44.848908  889866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33546 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/functional-656017/id_rsa Username:docker}
I1026 14:34:44.948198  889866 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-656017 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/my-image                      │ functional-656017  │ 5e07028687ec1 │ 1.47MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/nginx                 │ alpine             │ 5e7abcdd20216 │ 54.2MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-656017 image ls --format table --alsologtostderr:
I1026 14:34:47.826700  890570 out.go:360] Setting OutFile to fd 1 ...
I1026 14:34:47.827016  890570 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:34:47.827027  890570 out.go:374] Setting ErrFile to fd 2...
I1026 14:34:47.827032  890570 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:34:47.827251  890570 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
I1026 14:34:47.827962  890570 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:34:47.828083  890570 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:34:47.828504  890570 cli_runner.go:164] Run: docker container inspect functional-656017 --format={{.State.Status}}
I1026 14:34:47.846950  890570 ssh_runner.go:195] Run: systemctl --version
I1026 14:34:47.847005  890570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-656017
I1026 14:34:47.865345  890570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33546 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/functional-656017/id_rsa Username:docker}
I1026 14:34:47.965821  890570 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-656017 image ls --format json --alsologtostderr:
[{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5","repoDigests":["docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22","docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54168570"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"rep
oTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"74691
1"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"7b78b6a1273e8cc1ed5de9463fbfcf5f874288ae735433bb952acac47671fb56","repoDigests":["docker.io/library/ec388a8d99a9dfbdba22363b0a468c8b736d444cdb3d0727ed72c8e3a912fd91-tmp@sha256:9d7589ce77508ea7ed1ada93e40dcbc9febe37f90252aff8bb5f768b08844fcc"],"repoTags":[],"size":"1466132"},{"id":"5e07028687ec1bbcce6a26c33a3dd15d7572d7cab7815ff19ea7bd6054645b83","repoDigests":["localhost/my-image@sha256:0b4ebd2e8be7829298b4e3840d5d1c9fd3092bc680164f2d5dc3b52433d980ea"],"repoTags":["localhost/my-image:functional-656017"],"size":"1468743"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-cont
roller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06
ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a832
1e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-656017 image ls --format json --alsologtostderr:
I1026 14:34:47.592699  890498 out.go:360] Setting OutFile to fd 1 ...
I1026 14:34:47.592977  890498 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:34:47.592986  890498 out.go:374] Setting ErrFile to fd 2...
I1026 14:34:47.592990  890498 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:34:47.593222  890498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
I1026 14:34:47.593822  890498 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:34:47.593927  890498 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:34:47.594340  890498 cli_runner.go:164] Run: docker container inspect functional-656017 --format={{.State.Status}}
I1026 14:34:47.612465  890498 ssh_runner.go:195] Run: systemctl --version
I1026 14:34:47.612535  890498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-656017
I1026 14:34:47.630792  890498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33546 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/functional-656017/id_rsa Username:docker}
I1026 14:34:47.731221  890498 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
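
The JSON above is an array of image records; a minimal decoder sketch with field names inferred from this output rather than from a documented schema (note that size is bytes serialized as a string):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-656017",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%-13.13s %s (%s bytes)\n", img.ID, img.RepoTags[0], img.Size)
		}
	}
}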

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-656017 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5
repoDigests:
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
- docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e
repoTags:
- docker.io/library/nginx:alpine
size: "54168570"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-656017 image ls --format yaml --alsologtostderr:
I1026 14:34:45.043030  889921 out.go:360] Setting OutFile to fd 1 ...
I1026 14:34:45.043309  889921 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:34:45.043320  889921 out.go:374] Setting ErrFile to fd 2...
I1026 14:34:45.043324  889921 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:34:45.043530  889921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
I1026 14:34:45.044114  889921 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:34:45.044231  889921 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:34:45.044632  889921 cli_runner.go:164] Run: docker container inspect functional-656017 --format={{.State.Status}}
I1026 14:34:45.062514  889921 ssh_runner.go:195] Run: systemctl --version
I1026 14:34:45.062560  889921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-656017
I1026 14:34:45.080349  889921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33546 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/functional-656017/id_rsa Username:docker}
I1026 14:34:45.181216  889921 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-656017 ssh pgrep buildkitd: exit status 1 (274.271011ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 image build -t localhost/my-image:functional-656017 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-656017 image build -t localhost/my-image:functional-656017 testdata/build --alsologtostderr: (1.810791779s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-656017 image build -t localhost/my-image:functional-656017 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 7b78b6a1273
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-656017
--> 5e07028687e
Successfully tagged localhost/my-image:functional-656017
5e07028687ec1bbcce6a26c33a3dd15d7572d7cab7815ff19ea7bd6054645b83
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-656017 image build -t localhost/my-image:functional-656017 testdata/build --alsologtostderr:
I1026 14:34:45.550148  890085 out.go:360] Setting OutFile to fd 1 ...
I1026 14:34:45.550399  890085 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:34:45.550407  890085 out.go:374] Setting ErrFile to fd 2...
I1026 14:34:45.550411  890085 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:34:45.550613  890085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
I1026 14:34:45.551358  890085 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:34:45.552020  890085 config.go:182] Loaded profile config "functional-656017": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:34:45.552476  890085 cli_runner.go:164] Run: docker container inspect functional-656017 --format={{.State.Status}}
I1026 14:34:45.570135  890085 ssh_runner.go:195] Run: systemctl --version
I1026 14:34:45.570206  890085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-656017
I1026 14:34:45.587681  890085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33546 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/functional-656017/id_rsa Username:docker}
I1026 14:34:45.686798  890085 build_images.go:161] Building image from path: /tmp/build.3472540065.tar
I1026 14:34:45.686868  890085 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1026 14:34:45.695185  890085 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3472540065.tar
I1026 14:34:45.698878  890085 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3472540065.tar: stat -c "%s %y" /var/lib/minikube/build/build.3472540065.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3472540065.tar': No such file or directory
I1026 14:34:45.698905  890085 ssh_runner.go:362] scp /tmp/build.3472540065.tar --> /var/lib/minikube/build/build.3472540065.tar (3072 bytes)
I1026 14:34:45.716968  890085 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3472540065
I1026 14:34:45.724726  890085 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3472540065 -xf /var/lib/minikube/build/build.3472540065.tar
I1026 14:34:45.732926  890085 crio.go:315] Building image: /var/lib/minikube/build/build.3472540065
I1026 14:34:45.732999  890085 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-656017 /var/lib/minikube/build/build.3472540065 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1026 14:34:47.278251  890085 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-656017 /var/lib/minikube/build/build.3472540065 --cgroup-manager=cgroupfs: (1.54522183s)
I1026 14:34:47.278322  890085 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3472540065
I1026 14:34:47.286962  890085 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3472540065.tar
I1026 14:34:47.294726  890085 build_images.go:217] Built localhost/my-image:functional-656017 from /tmp/build.3472540065.tar
I1026 14:34:47.294770  890085 build_images.go:133] succeeded building to: functional-656017
I1026 14:34:47.294777  890085 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.32s)
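
Note: the STEP 1/3..3/3 lines above imply the shape of the build context. A minimal sketch for reproducing the build by hand, assuming testdata/build contains only a Dockerfile plus content.txt (the real file contents are not captured in this log):

    # Hypothetical reconstruction of the build context, inferred from the STEP lines above
    mkdir -p /tmp/build && cd /tmp/build
    echo placeholder > content.txt        # stand-in; the actual content.txt is not shown here
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    # Build inside the cluster's runtime, exactly as the test does (on crio this delegates to podman):
    out/minikube-linux-amd64 -p functional-656017 image build -t localhost/my-image:functional-656017 .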

TestFunctional/parallel/ImageCommands/Setup (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-656017
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.96s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 image rm kicbase/echo-server:functional-656017 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 update-context --alsologtostderr -v=2
E1026 14:36:34.525346  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
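
Note: all three UpdateContextCmd subtests drive the same command; update-context rewrites the profile's kubeconfig entry to point at the cluster's current API server. A sketch of verifying the effect by hand (the jsonpath query is illustrative):

    # Re-point kubeconfig at the profile's current endpoint, as the test does:
    out/minikube-linux-amd64 -p functional-656017 update-context
    # Inspect the server URL now recorded for the active context:
    kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'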

TestFunctional/parallel/ServiceCmd/List (1.72s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-656017 service list: (1.720728581s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.72s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-656017 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-656017 service list -o json: (1.700377885s)
functional_test.go:1504: Took "1.700486519s" to run "out/minikube-linux-amd64 -p functional-656017 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.70s)
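
Note: a sketch of consuming the JSON listing programmatically, assuming jq is installed and the Namespace/Name field names of minikube's service list output:

    out/minikube-linux-amd64 -p functional-656017 service list -o json \
      | jq -r '.[] | "\(.Namespace)/\(.Name)"'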

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-656017
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-656017
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-656017
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (106.09s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-068218 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m45.323205349s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (106.09s)

TestMultiControlPlane/serial/DeployApp (3.86s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 kubectl -- rollout status deployment/busybox
E1026 14:46:34.526945  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-068218 kubectl -- rollout status deployment/busybox: (1.814772841s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 kubectl -- exec busybox-7b57f96db7-4krcj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 kubectl -- exec busybox-7b57f96db7-8tbdj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 kubectl -- exec busybox-7b57f96db7-zwh27 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 kubectl -- exec busybox-7b57f96db7-4krcj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 kubectl -- exec busybox-7b57f96db7-8tbdj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 kubectl -- exec busybox-7b57f96db7-zwh27 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 kubectl -- exec busybox-7b57f96db7-4krcj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 kubectl -- exec busybox-7b57f96db7-8tbdj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 kubectl -- exec busybox-7b57f96db7-zwh27 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (3.86s)
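
Note: the per-pod exec calls above can be replayed as one loop; a sketch assuming the busybox pods carry an app=busybox label (the testdata/ha/ha-pod-dns-test.yaml manifest itself is not reproduced in this log):

    # Repeat the in-cluster DNS checks across every busybox pod (label selector is an assumption):
    for pod in $(kubectl --context ha-068218 get pods -l app=busybox -o name); do
      kubectl --context ha-068218 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done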

TestMultiControlPlane/serial/PingHostFromPods (1.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 kubectl -- exec busybox-7b57f96db7-4krcj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 kubectl -- exec busybox-7b57f96db7-4krcj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 kubectl -- exec busybox-7b57f96db7-8tbdj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 kubectl -- exec busybox-7b57f96db7-8tbdj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 kubectl -- exec busybox-7b57f96db7-zwh27 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 kubectl -- exec busybox-7b57f96db7-zwh27 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.09s)
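
Note: the pipeline above depends on busybox's fixed nslookup output layout; an annotated sketch of the same check:

    # busybox nslookup prints the answer on line 5 as "Address 1: <ip> host.minikube.internal";
    # awk 'NR==5' keeps that line and cut -d' ' -f3 takes the third space-separated field,
    # i.e. the host IP the pod then pings. Other nslookup builds format output differently,
    # which is why the test pins NR==5.
    HOST_IP=$(kubectl --context ha-068218 exec busybox-7b57f96db7-4krcj -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context ha-068218 exec busybox-7b57f96db7-4krcj -- ping -c 1 "$HOST_IP"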

TestMultiControlPlane/serial/AddWorkerNode (25.15s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-068218 node add --alsologtostderr -v 5: (24.218919335s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (25.15s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-068218 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

TestMultiControlPlane/serial/CopyFile (17.99s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 cp testdata/cp-test.txt ha-068218:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 cp ha-068218:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile615157719/001/cp-test_ha-068218.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 cp ha-068218:/home/docker/cp-test.txt ha-068218-m02:/home/docker/cp-test_ha-068218_ha-068218-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m02 "sudo cat /home/docker/cp-test_ha-068218_ha-068218-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 cp ha-068218:/home/docker/cp-test.txt ha-068218-m03:/home/docker/cp-test_ha-068218_ha-068218-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m03 "sudo cat /home/docker/cp-test_ha-068218_ha-068218-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 cp ha-068218:/home/docker/cp-test.txt ha-068218-m04:/home/docker/cp-test_ha-068218_ha-068218-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m04 "sudo cat /home/docker/cp-test_ha-068218_ha-068218-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 cp testdata/cp-test.txt ha-068218-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 cp ha-068218-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile615157719/001/cp-test_ha-068218-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 cp ha-068218-m02:/home/docker/cp-test.txt ha-068218:/home/docker/cp-test_ha-068218-m02_ha-068218.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218 "sudo cat /home/docker/cp-test_ha-068218-m02_ha-068218.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 cp ha-068218-m02:/home/docker/cp-test.txt ha-068218-m03:/home/docker/cp-test_ha-068218-m02_ha-068218-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m03 "sudo cat /home/docker/cp-test_ha-068218-m02_ha-068218-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 cp ha-068218-m02:/home/docker/cp-test.txt ha-068218-m04:/home/docker/cp-test_ha-068218-m02_ha-068218-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m04 "sudo cat /home/docker/cp-test_ha-068218-m02_ha-068218-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 cp testdata/cp-test.txt ha-068218-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 cp ha-068218-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile615157719/001/cp-test_ha-068218-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 cp ha-068218-m03:/home/docker/cp-test.txt ha-068218:/home/docker/cp-test_ha-068218-m03_ha-068218.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218 "sudo cat /home/docker/cp-test_ha-068218-m03_ha-068218.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 cp ha-068218-m03:/home/docker/cp-test.txt ha-068218-m02:/home/docker/cp-test_ha-068218-m03_ha-068218-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m02 "sudo cat /home/docker/cp-test_ha-068218-m03_ha-068218-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 cp ha-068218-m03:/home/docker/cp-test.txt ha-068218-m04:/home/docker/cp-test_ha-068218-m03_ha-068218-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m04 "sudo cat /home/docker/cp-test_ha-068218-m03_ha-068218-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 cp testdata/cp-test.txt ha-068218-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 cp ha-068218-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile615157719/001/cp-test_ha-068218-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 cp ha-068218-m04:/home/docker/cp-test.txt ha-068218:/home/docker/cp-test_ha-068218-m04_ha-068218.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218 "sudo cat /home/docker/cp-test_ha-068218-m04_ha-068218.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 cp ha-068218-m04:/home/docker/cp-test.txt ha-068218-m02:/home/docker/cp-test_ha-068218-m04_ha-068218-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m02 "sudo cat /home/docker/cp-test_ha-068218-m04_ha-068218-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 cp ha-068218-m04:/home/docker/cp-test.txt ha-068218-m03:/home/docker/cp-test_ha-068218-m04_ha-068218-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m03 "sudo cat /home/docker/cp-test_ha-068218-m04_ha-068218-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.99s)
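
Note: condensed, the copy matrix above exercises every direction of minikube cp's <node>:<path> addressing; a sketch adapted from the commands in the run:

    # host -> node, node -> host, and node -> node copies all use "<node>:<absolute path>":
    out/minikube-linux-amd64 -p ha-068218 cp testdata/cp-test.txt ha-068218-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-068218 cp ha-068218-m02:/home/docker/cp-test.txt /tmp/cp-test_ha-068218-m02.txt
    out/minikube-linux-amd64 -p ha-068218 cp ha-068218-m02:/home/docker/cp-test.txt ha-068218-m03:/home/docker/cp-test_ha-068218-m02_ha-068218-m03.txt
    # Verify on the target node over ssh, as the helpers do:
    out/minikube-linux-amd64 -p ha-068218 ssh -n ha-068218-m03 "sudo cat /home/docker/cp-test_ha-068218-m02_ha-068218-m03.txt"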

TestMultiControlPlane/serial/StopSecondaryNode (19.14s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-068218 node stop m02 --alsologtostderr -v 5: (18.427658173s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-068218 status --alsologtostderr -v 5: exit status 7 (714.753038ms)

-- stdout --
	ha-068218
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-068218-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-068218-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-068218-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1026 14:47:41.874191  915353 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:47:41.874302  915353 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:47:41.874309  915353 out.go:374] Setting ErrFile to fd 2...
	I1026 14:47:41.874316  915353 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:47:41.874499  915353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:47:41.874667  915353 out.go:368] Setting JSON to false
	I1026 14:47:41.874707  915353 mustload.go:65] Loading cluster: ha-068218
	I1026 14:47:41.874765  915353 notify.go:220] Checking for updates...
	I1026 14:47:41.875293  915353 config.go:182] Loaded profile config "ha-068218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:47:41.875317  915353 status.go:174] checking status of ha-068218 ...
	I1026 14:47:41.875923  915353 cli_runner.go:164] Run: docker container inspect ha-068218 --format={{.State.Status}}
	I1026 14:47:41.895862  915353 status.go:371] ha-068218 host status = "Running" (err=<nil>)
	I1026 14:47:41.895907  915353 host.go:66] Checking if "ha-068218" exists ...
	I1026 14:47:41.896254  915353 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-068218
	I1026 14:47:41.914687  915353 host.go:66] Checking if "ha-068218" exists ...
	I1026 14:47:41.915011  915353 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 14:47:41.915088  915353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-068218
	I1026 14:47:41.934087  915353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33551 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/ha-068218/id_rsa Username:docker}
	I1026 14:47:42.034038  915353 ssh_runner.go:195] Run: systemctl --version
	I1026 14:47:42.040431  915353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 14:47:42.053100  915353 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 14:47:42.110920  915353 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-26 14:47:42.101443246 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 14:47:42.111548  915353 kubeconfig.go:125] found "ha-068218" server: "https://192.168.49.254:8443"
	I1026 14:47:42.111583  915353 api_server.go:166] Checking apiserver status ...
	I1026 14:47:42.111628  915353 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 14:47:42.123654  915353 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1236/cgroup
	W1026 14:47:42.132720  915353 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1236/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1026 14:47:42.132768  915353 ssh_runner.go:195] Run: ls
	I1026 14:47:42.136560  915353 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1026 14:47:42.140731  915353 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1026 14:47:42.140756  915353 status.go:463] ha-068218 apiserver status = Running (err=<nil>)
	I1026 14:47:42.140768  915353 status.go:176] ha-068218 status: &{Name:ha-068218 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 14:47:42.140785  915353 status.go:174] checking status of ha-068218-m02 ...
	I1026 14:47:42.141022  915353 cli_runner.go:164] Run: docker container inspect ha-068218-m02 --format={{.State.Status}}
	I1026 14:47:42.159526  915353 status.go:371] ha-068218-m02 host status = "Stopped" (err=<nil>)
	I1026 14:47:42.159546  915353 status.go:384] host is not running, skipping remaining checks
	I1026 14:47:42.159553  915353 status.go:176] ha-068218-m02 status: &{Name:ha-068218-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 14:47:42.159574  915353 status.go:174] checking status of ha-068218-m03 ...
	I1026 14:47:42.159846  915353 cli_runner.go:164] Run: docker container inspect ha-068218-m03 --format={{.State.Status}}
	I1026 14:47:42.178659  915353 status.go:371] ha-068218-m03 host status = "Running" (err=<nil>)
	I1026 14:47:42.178684  915353 host.go:66] Checking if "ha-068218-m03" exists ...
	I1026 14:47:42.179000  915353 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-068218-m03
	I1026 14:47:42.197559  915353 host.go:66] Checking if "ha-068218-m03" exists ...
	I1026 14:47:42.197934  915353 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 14:47:42.198000  915353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-068218-m03
	I1026 14:47:42.216961  915353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33561 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/ha-068218-m03/id_rsa Username:docker}
	I1026 14:47:42.314799  915353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 14:47:42.328070  915353 kubeconfig.go:125] found "ha-068218" server: "https://192.168.49.254:8443"
	I1026 14:47:42.328100  915353 api_server.go:166] Checking apiserver status ...
	I1026 14:47:42.328135  915353 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 14:47:42.339984  915353 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1170/cgroup
	W1026 14:47:42.349074  915353 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1170/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1026 14:47:42.349131  915353 ssh_runner.go:195] Run: ls
	I1026 14:47:42.353240  915353 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1026 14:47:42.357278  915353 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1026 14:47:42.357302  915353 status.go:463] ha-068218-m03 apiserver status = Running (err=<nil>)
	I1026 14:47:42.357312  915353 status.go:176] ha-068218-m03 status: &{Name:ha-068218-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 14:47:42.357331  915353 status.go:174] checking status of ha-068218-m04 ...
	I1026 14:47:42.357619  915353 cli_runner.go:164] Run: docker container inspect ha-068218-m04 --format={{.State.Status}}
	I1026 14:47:42.376762  915353 status.go:371] ha-068218-m04 host status = "Running" (err=<nil>)
	I1026 14:47:42.376788  915353 host.go:66] Checking if "ha-068218-m04" exists ...
	I1026 14:47:42.377040  915353 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-068218-m04
	I1026 14:47:42.395399  915353 host.go:66] Checking if "ha-068218-m04" exists ...
	I1026 14:47:42.395660  915353 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 14:47:42.395706  915353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-068218-m04
	I1026 14:47:42.413974  915353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33566 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/ha-068218-m04/id_rsa Username:docker}
	I1026 14:47:42.512413  915353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 14:47:42.525032  915353 status.go:176] ha-068218-m04 status: &{Name:ha-068218-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.14s)
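
Note: the non-zero exit captured above is expected; minikube status reports a stopped node through its exit code (7 in this run) rather than through a command failure. A sketch of checking it:

    out/minikube-linux-amd64 -p ha-068218 status --alsologtostderr -v 5
    echo "status exit code: $?"   # 0 = all nodes up; 7 observed here with m02 stopped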

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

TestMultiControlPlane/serial/RestartSecondaryNode (9.15s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-068218 node start m02 --alsologtostderr -v 5: (8.163672121s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.15s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (193.37s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 stop --alsologtostderr -v 5
E1026 14:48:25.311835  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:48:25.321252  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:48:25.332737  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:48:25.354383  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:48:25.395871  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:48:25.477571  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:48:25.639434  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:48:25.961383  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:48:26.602779  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:48:27.884527  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:48:30.447576  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:48:35.569431  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-068218 stop --alsologtostderr -v 5: (49.2482458s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 start --wait true --alsologtostderr -v 5
E1026 14:48:45.811213  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:49:06.292888  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:49:47.255764  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-068218 start --wait true --alsologtostderr -v 5: (2m23.977483926s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (193.37s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.69s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 node delete m03 --alsologtostderr -v 5
E1026 14:51:09.177270  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-068218 node delete m03 --alsologtostderr -v 5: (9.81314525s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.69s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

TestMultiControlPlane/serial/StopCluster (46.75s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 stop --alsologtostderr -v 5
E1026 14:51:34.527124  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-068218 stop --alsologtostderr -v 5: (46.633337391s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-068218 status --alsologtostderr -v 5: exit status 7 (118.291865ms)

-- stdout --
	ha-068218
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-068218-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-068218-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1026 14:52:04.813536  929831 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:52:04.813653  929831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:52:04.813662  929831 out.go:374] Setting ErrFile to fd 2...
	I1026 14:52:04.813666  929831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:52:04.814436  929831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 14:52:04.814963  929831 out.go:368] Setting JSON to false
	I1026 14:52:04.815008  929831 mustload.go:65] Loading cluster: ha-068218
	I1026 14:52:04.815131  929831 notify.go:220] Checking for updates...
	I1026 14:52:04.815559  929831 config.go:182] Loaded profile config "ha-068218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:52:04.815581  929831 status.go:174] checking status of ha-068218 ...
	I1026 14:52:04.816107  929831 cli_runner.go:164] Run: docker container inspect ha-068218 --format={{.State.Status}}
	I1026 14:52:04.837002  929831 status.go:371] ha-068218 host status = "Stopped" (err=<nil>)
	I1026 14:52:04.837028  929831 status.go:384] host is not running, skipping remaining checks
	I1026 14:52:04.837034  929831 status.go:176] ha-068218 status: &{Name:ha-068218 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 14:52:04.837057  929831 status.go:174] checking status of ha-068218-m02 ...
	I1026 14:52:04.837329  929831 cli_runner.go:164] Run: docker container inspect ha-068218-m02 --format={{.State.Status}}
	I1026 14:52:04.854326  929831 status.go:371] ha-068218-m02 host status = "Stopped" (err=<nil>)
	I1026 14:52:04.854349  929831 status.go:384] host is not running, skipping remaining checks
	I1026 14:52:04.854357  929831 status.go:176] ha-068218-m02 status: &{Name:ha-068218-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 14:52:04.854382  929831 status.go:174] checking status of ha-068218-m04 ...
	I1026 14:52:04.854691  929831 cli_runner.go:164] Run: docker container inspect ha-068218-m04 --format={{.State.Status}}
	I1026 14:52:04.871462  929831 status.go:371] ha-068218-m04 host status = "Stopped" (err=<nil>)
	I1026 14:52:04.871487  929831 status.go:384] host is not running, skipping remaining checks
	I1026 14:52:04.871496  929831 status.go:176] ha-068218-m04 status: &{Name:ha-068218-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (46.75s)

TestMultiControlPlane/serial/RestartCluster (54.96s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-068218 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (54.122065921s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (54.96s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

TestMultiControlPlane/serial/AddSecondaryNode (48.08s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 node add --control-plane --alsologtostderr -v 5
E1026 14:53:25.311881  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-068218 node add --control-plane --alsologtostderr -v 5: (47.143435015s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-068218 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (48.08s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

TestJSONOutput/start/Command (38.78s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-439592 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-439592 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (38.783078385s)
--- PASS: TestJSONOutput/start/Command (38.78s)
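
Note: with --output=json minikube emits one CloudEvent per line (see the sample objects in the TestErrorJSONOutput stdout further below); a sketch of following the step progress with jq, assuming jq is installed:

    out/minikube-linux-amd64 start -p json-output-439592 --output=json --user=testUser \
        --memory=3072 --wait=true --driver=docker --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | "\(.data.currentstep)/\(.data.totalsteps) \(.data.message)"'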

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.07s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-439592 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-439592 --output=json --user=testUser: (6.068932804s)
--- PASS: TestJSONOutput/stop/Command (6.07s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-368433 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-368433 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (79.265412ms)

-- stdout --
	{"specversion":"1.0","id":"d998d924-7ce3-4d1c-8ccb-bc5cd1f99023","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-368433] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"03f1332b-8add-441e-a211-9cade48decc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21664"}}
	{"specversion":"1.0","id":"b8a6daad-21e1-4fa6-bc89-c6c4dd013fa8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5149abd9-dbd3-4fc4-bd84-80d2b9c5ab97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig"}}
	{"specversion":"1.0","id":"85201680-11ce-470e-bc36-e80897fce731","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube"}}
	{"specversion":"1.0","id":"9f472963-28c4-41c3-9718-918c73f208df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"bca31170-09e1-4e5f-bc38-405f02a796bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e4da1580-ab2b-4bdb-a8d8-bc62e55d1cd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-368433" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-368433
--- PASS: TestErrorJSONOutput (0.23s)
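
Note: the lines above are CloudEvents, one JSON object per line, so they can be post-processed with standard tools. A minimal sketch, assuming jq is installed; the profile name is illustrative and the bad --driver value is deliberate, to force an error event:

  # Extract the message of any error event from minikube's JSON output
  out/minikube-linux-amd64 start -p json-demo --output=json --driver=fail 2>/dev/null \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
  # Prints: The driver 'fail' is not supported on linux/amd64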

TestKicCustomNetwork/create_custom_network (27.2s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-817337 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-817337 --network=: (25.027326872s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-817337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-817337
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-817337: (2.152611029s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.20s)
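
Note: a minimal sketch of the flag exercised here, assuming a local minikube binary and Docker; the profile and network names are illustrative. Passing --network= with a value makes minikube create (or reuse) a Docker network of that name, while the empty value used above lets minikube pick a name itself:

  out/minikube-linux-amd64 start -p net-demo --network=my-net
  docker network ls --format '{{.Name}}'        # my-net should appear in the list
  out/minikube-linux-amd64 delete -p net-demo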

TestKicCustomNetwork/use_default_bridge_network (24s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-473021 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-473021 --network=bridge: (21.955617697s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-473021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-473021
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-473021: (2.020208172s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.00s)

TestKicExistingNetwork (24.9s)

=== RUN   TestKicExistingNetwork
I1026 14:55:41.324181  845095 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1026 14:55:41.341829  845095 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1026 14:55:41.341979  845095 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1026 14:55:41.342022  845095 cli_runner.go:164] Run: docker network inspect existing-network
W1026 14:55:41.359252  845095 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1026 14:55:41.359283  845095 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1026 14:55:41.359298  845095 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1026 14:55:41.359424  845095 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1026 14:55:41.377208  845095 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fa58be42f477 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:e4:ad:45:54:67} reservation:<nil>}
I1026 14:55:41.377627  845095 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d83040}
I1026 14:55:41.377665  845095 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1026 14:55:41.377714  845095 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1026 14:55:41.435399  845095 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-429067 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-429067 --network=existing-network: (22.68576399s)
helpers_test.go:175: Cleaning up "existing-network-429067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-429067
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-429067: (2.066331576s)
I1026 14:56:06.205135  845095 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.90s)
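
Note: this test pre-creates the Docker network itself and then points minikube at it. A minimal sketch of the same flow, with illustrative names and the subnet taken from the log above:

  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-net
  out/minikube-linux-amd64 start -p reuse-demo --network=existing-net
  out/minikube-linux-amd64 delete -p reuse-demo
  docker network rm existing-net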

TestKicCustomSubnet (26.78s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-118015 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-118015 --subnet=192.168.60.0/24: (24.574806965s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-118015 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-118015" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-118015
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-118015: (2.188979687s)
--- PASS: TestKicCustomSubnet (26.78s)
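
Note: --subnet pins the CIDR of the network minikube creates instead of letting it pick a free private range. A minimal sketch with an illustrative profile name; the inspect line mirrors the verification the test performs:

  out/minikube-linux-amd64 start -p subnet-demo --subnet=192.168.60.0/24
  docker network inspect subnet-demo --format '{{(index .IPAM.Config 0).Subnet}}'   # 192.168.60.0/24
  out/minikube-linux-amd64 delete -p subnet-demo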

TestKicStaticIP (24.76s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-977021 --static-ip=192.168.200.200
E1026 14:56:34.525404  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-977021 --static-ip=192.168.200.200: (22.419529553s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-977021 ip
helpers_test.go:175: Cleaning up "static-ip-977021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-977021
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-977021: (2.178833783s)
--- PASS: TestKicStaticIP (24.76s)
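
Note: --static-ip fixes the node container's address rather than taking the next free one from the subnet. A minimal sketch with an illustrative profile name; the address should be a private IPv4 address:

  out/minikube-linux-amd64 start -p ip-demo --static-ip=192.168.200.200
  out/minikube-linux-amd64 -p ip-demo ip     # prints 192.168.200.200
  out/minikube-linux-amd64 delete -p ip-demo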

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (50.24s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-389141 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-389141 --driver=docker  --container-runtime=crio: (23.338732894s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-391218 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-391218 --driver=docker  --container-runtime=crio: (20.825113785s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-389141
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-391218
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-391218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-391218
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-391218: (2.398896552s)
helpers_test.go:175: Cleaning up "first-389141" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-389141
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-389141: (2.376402406s)
--- PASS: TestMinikubeProfile (50.24s)
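
Note: `profile list -ojson` is what the test parses to confirm the active profile switches. A minimal sketch of the same inspection, assuming jq is installed and that the output keeps its usual shape with a top-level "valid" array of profile objects:

  out/minikube-linux-amd64 profile first-389141                           # make it the active profile
  out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'    # lists first-389141 and second-391218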

TestMountStart/serial/StartWithMountFirst (8.82s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-343000 --memory=3072 --mount-string /tmp/TestMountStartserial4099908932/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-343000 --memory=3072 --mount-string /tmp/TestMountStartserial4099908932/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.82175793s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.82s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-343000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (8.08s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-357105 --memory=3072 --mount-string /tmp/TestMountStartserial4099908932/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-357105 --memory=3072 --mount-string /tmp/TestMountStartserial4099908932/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.075649654s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.08s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-357105 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-343000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-343000 --alsologtostderr -v=5: (1.72149509s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-357105 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-357105
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-357105: (1.267547916s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (7.2s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-357105
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-357105: (6.201002375s)
--- PASS: TestMountStart/serial/RestartStopped (7.20s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-357105 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)
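
Note: the series above shares one host directory across two profiles and shows the mount surviving a stop/start cycle. A minimal sketch of the mount flags from the start commands in the log, assuming /tmp/shared exists on the host; profile name and host path are illustrative:

  out/minikube-linux-amd64 start -p mount-demo --memory=3072 \
    --mount-string /tmp/shared:/minikube-host \
    --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
    --no-kubernetes --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 -p mount-demo ssh -- ls /minikube-host   # host files visible in the node
  out/minikube-linux-amd64 delete -p mount-demo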

TestMultiNode/serial/FreshStart2Nodes (90.84s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-260844 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1026 14:58:25.311404  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:59:37.594932  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-260844 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m30.325893824s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (90.84s)
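
Note: a minimal sketch of bringing up the same two-node topology by hand, with an illustrative profile name; --wait=true blocks until all components report healthy:

  out/minikube-linux-amd64 start -p multi-demo --memory=3072 --nodes=2 \
    --driver=docker --container-runtime=crio --wait=true
  out/minikube-linux-amd64 -p multi-demo status    # one control plane plus one worker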

TestMultiNode/serial/DeployApp2Nodes (3.29s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-260844 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-260844 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-260844 -- rollout status deployment/busybox: (1.803661622s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-260844 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-260844 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-260844 -- exec busybox-7b57f96db7-khqvx -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-260844 -- exec busybox-7b57f96db7-lw9xk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-260844 -- exec busybox-7b57f96db7-khqvx -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-260844 -- exec busybox-7b57f96db7-lw9xk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-260844 -- exec busybox-7b57f96db7-khqvx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-260844 -- exec busybox-7b57f96db7-lw9xk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.29s)
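
Note: the manifest spreads busybox replicas across both nodes, then DNS is probed from inside each pod. A minimal sketch of the probe via minikube's kubectl passthrough, assuming a deployment named busybox in the illustrative multi-demo profile; fill the pod-name placeholder in from `get pods`:

  out/minikube-linux-amd64 kubectl -p multi-demo -- rollout status deployment/busybox
  out/minikube-linux-amd64 kubectl -p multi-demo -- get pods -o jsonpath='{.items[*].metadata.name}'
  out/minikube-linux-amd64 kubectl -p multi-demo -- exec <pod-name> -- nslookup kubernetes.default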

TestMultiNode/serial/PingHostFrom2Pods (0.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-260844 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-260844 -- exec busybox-7b57f96db7-khqvx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-260844 -- exec busybox-7b57f96db7-khqvx -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-260844 -- exec busybox-7b57f96db7-lw9xk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-260844 -- exec busybox-7b57f96db7-lw9xk -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

TestMultiNode/serial/AddNode (27.82s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-260844 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-260844 -v=5 --alsologtostderr: (27.148375687s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.82s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-260844 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.69s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

TestMultiNode/serial/CopyFile (10.22s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 cp testdata/cp-test.txt multinode-260844:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 ssh -n multinode-260844 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 cp multinode-260844:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile810509717/001/cp-test_multinode-260844.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 ssh -n multinode-260844 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 cp multinode-260844:/home/docker/cp-test.txt multinode-260844-m02:/home/docker/cp-test_multinode-260844_multinode-260844-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 ssh -n multinode-260844 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 ssh -n multinode-260844-m02 "sudo cat /home/docker/cp-test_multinode-260844_multinode-260844-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 cp multinode-260844:/home/docker/cp-test.txt multinode-260844-m03:/home/docker/cp-test_multinode-260844_multinode-260844-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 ssh -n multinode-260844 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 ssh -n multinode-260844-m03 "sudo cat /home/docker/cp-test_multinode-260844_multinode-260844-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 cp testdata/cp-test.txt multinode-260844-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 ssh -n multinode-260844-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 cp multinode-260844-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile810509717/001/cp-test_multinode-260844-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 ssh -n multinode-260844-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 cp multinode-260844-m02:/home/docker/cp-test.txt multinode-260844:/home/docker/cp-test_multinode-260844-m02_multinode-260844.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 ssh -n multinode-260844-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 ssh -n multinode-260844 "sudo cat /home/docker/cp-test_multinode-260844-m02_multinode-260844.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 cp multinode-260844-m02:/home/docker/cp-test.txt multinode-260844-m03:/home/docker/cp-test_multinode-260844-m02_multinode-260844-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 ssh -n multinode-260844-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 ssh -n multinode-260844-m03 "sudo cat /home/docker/cp-test_multinode-260844-m02_multinode-260844-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 cp testdata/cp-test.txt multinode-260844-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 ssh -n multinode-260844-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 cp multinode-260844-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile810509717/001/cp-test_multinode-260844-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 ssh -n multinode-260844-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 cp multinode-260844-m03:/home/docker/cp-test.txt multinode-260844:/home/docker/cp-test_multinode-260844-m03_multinode-260844.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 ssh -n multinode-260844-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 ssh -n multinode-260844 "sudo cat /home/docker/cp-test_multinode-260844-m03_multinode-260844.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 cp multinode-260844-m03:/home/docker/cp-test.txt multinode-260844-m02:/home/docker/cp-test_multinode-260844-m03_multinode-260844-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 ssh -n multinode-260844-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 ssh -n multinode-260844-m02 "sudo cat /home/docker/cp-test_multinode-260844-m03_multinode-260844-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.22s)
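
Note: `minikube cp` accepts node-qualified paths on either side, which the matrix above exercises exhaustively. A minimal sketch of one host-to-node and node-to-node hop, with illustrative file and profile names:

  out/minikube-linux-amd64 -p multi-demo cp ./note.txt multi-demo:/home/docker/note.txt
  out/minikube-linux-amd64 -p multi-demo cp multi-demo:/home/docker/note.txt multi-demo-m02:/home/docker/note.txt
  out/minikube-linux-amd64 -p multi-demo ssh -n multi-demo-m02 "cat /home/docker/note.txt"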

TestMultiNode/serial/StopNode (2.31s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-260844 node stop m03: (1.269703275s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-260844 status: exit status 7 (520.18969ms)

-- stdout --
	multinode-260844
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-260844-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-260844-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-260844 status --alsologtostderr: exit status 7 (516.92346ms)

-- stdout --
	multinode-260844
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-260844-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-260844-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1026 15:00:33.712662  989569 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:00:33.712964  989569 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:00:33.712975  989569 out.go:374] Setting ErrFile to fd 2...
	I1026 15:00:33.712979  989569 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:00:33.713206  989569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:00:33.713396  989569 out.go:368] Setting JSON to false
	I1026 15:00:33.713433  989569 mustload.go:65] Loading cluster: multinode-260844
	I1026 15:00:33.713491  989569 notify.go:220] Checking for updates...
	I1026 15:00:33.713983  989569 config.go:182] Loaded profile config "multinode-260844": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:00:33.714005  989569 status.go:174] checking status of multinode-260844 ...
	I1026 15:00:33.714635  989569 cli_runner.go:164] Run: docker container inspect multinode-260844 --format={{.State.Status}}
	I1026 15:00:33.734254  989569 status.go:371] multinode-260844 host status = "Running" (err=<nil>)
	I1026 15:00:33.734301  989569 host.go:66] Checking if "multinode-260844" exists ...
	I1026 15:00:33.734611  989569 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-260844
	I1026 15:00:33.752767  989569 host.go:66] Checking if "multinode-260844" exists ...
	I1026 15:00:33.753029  989569 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:00:33.753071  989569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-260844
	I1026 15:00:33.771784  989569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33672 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/multinode-260844/id_rsa Username:docker}
	I1026 15:00:33.871035  989569 ssh_runner.go:195] Run: systemctl --version
	I1026 15:00:33.877792  989569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:00:33.890672  989569 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:00:33.950458  989569 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-26 15:00:33.94060744 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:00:33.951045  989569 kubeconfig.go:125] found "multinode-260844" server: "https://192.168.67.2:8443"
	I1026 15:00:33.951078  989569 api_server.go:166] Checking apiserver status ...
	I1026 15:00:33.951125  989569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:00:33.963247  989569 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1213/cgroup
	W1026 15:00:33.972090  989569 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1213/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1026 15:00:33.972143  989569 ssh_runner.go:195] Run: ls
	I1026 15:00:33.975955  989569 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1026 15:00:33.980186  989569 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1026 15:00:33.980212  989569 status.go:463] multinode-260844 apiserver status = Running (err=<nil>)
	I1026 15:00:33.980226  989569 status.go:176] multinode-260844 status: &{Name:multinode-260844 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 15:00:33.980245  989569 status.go:174] checking status of multinode-260844-m02 ...
	I1026 15:00:33.980500  989569 cli_runner.go:164] Run: docker container inspect multinode-260844-m02 --format={{.State.Status}}
	I1026 15:00:33.998635  989569 status.go:371] multinode-260844-m02 host status = "Running" (err=<nil>)
	I1026 15:00:33.998660  989569 host.go:66] Checking if "multinode-260844-m02" exists ...
	I1026 15:00:33.998915  989569 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-260844-m02
	I1026 15:00:34.016202  989569 host.go:66] Checking if "multinode-260844-m02" exists ...
	I1026 15:00:34.016484  989569 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 15:00:34.016528  989569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-260844-m02
	I1026 15:00:34.034869  989569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33677 SSHKeyPath:/home/jenkins/minikube-integration/21664-841519/.minikube/machines/multinode-260844-m02/id_rsa Username:docker}
	I1026 15:00:34.134021  989569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:00:34.146901  989569 status.go:176] multinode-260844-m02 status: &{Name:multinode-260844-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1026 15:00:34.146950  989569 status.go:174] checking status of multinode-260844-m03 ...
	I1026 15:00:34.147287  989569 cli_runner.go:164] Run: docker container inspect multinode-260844-m03 --format={{.State.Status}}
	I1026 15:00:34.165992  989569 status.go:371] multinode-260844-m03 host status = "Stopped" (err=<nil>)
	I1026 15:00:34.166029  989569 status.go:384] host is not running, skipping remaining checks
	I1026 15:00:34.166037  989569 status.go:176] multinode-260844-m03 status: &{Name:multinode-260844-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)

TestMultiNode/serial/StartAfterStop (7.31s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-260844 node start m03 -v=5 --alsologtostderr: (6.581588519s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.31s)
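
Note: a single worker can be cycled without disturbing the rest of the cluster, and `status` reflects the change at each step. A minimal sketch of the stop/start pair exercised above, using the profile and node names from the log:

  out/minikube-linux-amd64 -p multinode-260844 node stop m03    # host and kubelet report Stopped
  out/minikube-linux-amd64 -p multinode-260844 node start m03
  out/minikube-linux-amd64 -p multinode-260844 status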

TestMultiNode/serial/RestartKeepsNodes (79.18s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-260844
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-260844
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-260844: (29.771160727s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-260844 --wait=true -v=5 --alsologtostderr
E1026 15:01:34.525929  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-260844 --wait=true -v=5 --alsologtostderr: (49.27078586s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-260844
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.18s)

TestMultiNode/serial/DeleteNode (5.34s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-260844 node delete m03: (4.708001013s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.34s)

TestMultiNode/serial/StopMultiNode (30.42s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-260844 stop: (30.205220828s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-260844 status: exit status 7 (106.754995ms)

-- stdout --
	multinode-260844
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-260844-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-260844 status --alsologtostderr: exit status 7 (102.700081ms)

-- stdout --
	multinode-260844
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-260844-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1026 15:02:36.371576  999331 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:02:36.371851  999331 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:02:36.371863  999331 out.go:374] Setting ErrFile to fd 2...
	I1026 15:02:36.371867  999331 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:02:36.372071  999331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:02:36.372265  999331 out.go:368] Setting JSON to false
	I1026 15:02:36.372302  999331 mustload.go:65] Loading cluster: multinode-260844
	I1026 15:02:36.372357  999331 notify.go:220] Checking for updates...
	I1026 15:02:36.372843  999331 config.go:182] Loaded profile config "multinode-260844": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:02:36.372865  999331 status.go:174] checking status of multinode-260844 ...
	I1026 15:02:36.373443  999331 cli_runner.go:164] Run: docker container inspect multinode-260844 --format={{.State.Status}}
	I1026 15:02:36.392664  999331 status.go:371] multinode-260844 host status = "Stopped" (err=<nil>)
	I1026 15:02:36.392694  999331 status.go:384] host is not running, skipping remaining checks
	I1026 15:02:36.392701  999331 status.go:176] multinode-260844 status: &{Name:multinode-260844 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 15:02:36.392732  999331 status.go:174] checking status of multinode-260844-m02 ...
	I1026 15:02:36.392992  999331 cli_runner.go:164] Run: docker container inspect multinode-260844-m02 --format={{.State.Status}}
	I1026 15:02:36.411590  999331 status.go:371] multinode-260844-m02 host status = "Stopped" (err=<nil>)
	I1026 15:02:36.411626  999331 status.go:384] host is not running, skipping remaining checks
	I1026 15:02:36.411634  999331 status.go:176] multinode-260844-m02 status: &{Name:multinode-260844-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.42s)

TestMultiNode/serial/RestartMultiNode (29.82s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-260844 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-260844 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (29.200528974s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-260844 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (29.82s)

TestMultiNode/serial/ValidateNameConflict (24.96s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-260844
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-260844-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-260844-m02 --driver=docker  --container-runtime=crio: exit status 14 (84.652941ms)

-- stdout --
	* [multinode-260844-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-260844-m02' is duplicated with machine name 'multinode-260844-m02' in profile 'multinode-260844'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-260844-m03 --driver=docker  --container-runtime=crio
E1026 15:03:25.311636  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-260844-m03 --driver=docker  --container-runtime=crio: (22.048533851s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-260844
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-260844: exit status 80 (302.510766ms)

-- stdout --
	* Adding node m03 to cluster multinode-260844 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-260844-m03 already exists in multinode-260844-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-260844-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-260844-m03: (2.457295604s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.96s)

TestPreload (106.65s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-633732 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-633732 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (46.567059644s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-633732 image pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-633732
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-633732: (5.970766864s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-633732 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1026 15:04:48.381722  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-633732 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (50.518766256s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-633732 image list
helpers_test.go:175: Cleaning up "test-preload-633732" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-633732
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-633732: (2.424601466s)
--- PASS: TestPreload (106.65s)
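
Note: the test first builds the cluster with --preload=false so images must be pulled individually, then restarts it to confirm the pulled images survive. A minimal sketch of that sequence, with an illustrative profile name:

  out/minikube-linux-amd64 start -p preload-demo --preload=false \
    --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
  out/minikube-linux-amd64 -p preload-demo image pull gcr.io/k8s-minikube/busybox
  out/minikube-linux-amd64 stop -p preload-demo
  out/minikube-linux-amd64 start -p preload-demo --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 -p preload-demo image list    # busybox should still be listed after the restart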

TestScheduledStopUnix (97.88s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-269512 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-269512 --memory=3072 --driver=docker  --container-runtime=crio: (20.724803823s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-269512 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-269512 -n scheduled-stop-269512
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-269512 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1026 15:05:43.262976  845095 retry.go:31] will retry after 101.404µs: open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/scheduled-stop-269512/pid: no such file or directory
I1026 15:05:43.264148  845095 retry.go:31] will retry after 141.981µs: open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/scheduled-stop-269512/pid: no such file or directory
I1026 15:05:43.265322  845095 retry.go:31] will retry after 203.105µs: open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/scheduled-stop-269512/pid: no such file or directory
I1026 15:05:43.266462  845095 retry.go:31] will retry after 179.3µs: open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/scheduled-stop-269512/pid: no such file or directory
I1026 15:05:43.267594  845095 retry.go:31] will retry after 359.871µs: open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/scheduled-stop-269512/pid: no such file or directory
I1026 15:05:43.268733  845095 retry.go:31] will retry after 622.84µs: open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/scheduled-stop-269512/pid: no such file or directory
I1026 15:05:43.269861  845095 retry.go:31] will retry after 1.22795ms: open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/scheduled-stop-269512/pid: no such file or directory
I1026 15:05:43.272077  845095 retry.go:31] will retry after 1.873103ms: open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/scheduled-stop-269512/pid: no such file or directory
I1026 15:05:43.274280  845095 retry.go:31] will retry after 1.722084ms: open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/scheduled-stop-269512/pid: no such file or directory
I1026 15:05:43.276504  845095 retry.go:31] will retry after 2.592905ms: open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/scheduled-stop-269512/pid: no such file or directory
I1026 15:05:43.279769  845095 retry.go:31] will retry after 7.475449ms: open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/scheduled-stop-269512/pid: no such file or directory
I1026 15:05:43.288152  845095 retry.go:31] will retry after 10.702164ms: open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/scheduled-stop-269512/pid: no such file or directory
I1026 15:05:43.299421  845095 retry.go:31] will retry after 15.227172ms: open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/scheduled-stop-269512/pid: no such file or directory
I1026 15:05:43.315672  845095 retry.go:31] will retry after 22.349395ms: open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/scheduled-stop-269512/pid: no such file or directory
I1026 15:05:43.338934  845095 retry.go:31] will retry after 22.557249ms: open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/scheduled-stop-269512/pid: no such file or directory
I1026 15:05:43.362221  845095 retry.go:31] will retry after 62.332845ms: open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/scheduled-stop-269512/pid: no such file or directory
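The retry lines above come from minikube's retry helper polling for the schedule's pid file with a roughly doubling backoff (101µs, 141µs, ... 62ms) until the file appears. A minimal Go sketch of the same pattern, assuming nothing about minikube's internals beyond what the log shows (the function name and the pid-file path in main are illustrative):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForFile polls for path, roughly doubling the delay between
	// attempts, mirroring the backoff visible in the retry.go lines above.
	func waitForFile(path string, maxWait time.Duration) ([]byte, error) {
		delay := 100 * time.Microsecond
		deadline := time.Now().Add(maxWait)
		for {
			data, err := os.ReadFile(path)
			if err == nil {
				return data, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out waiting for %s: %w", path, err)
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2
		}
	}

	func main() {
		// Illustrative path; the test reads .minikube/profiles/<name>/pid.
		if _, err := waitForFile("/tmp/example-pid", time.Second); err != nil {
			fmt.Println(err)
		}
	}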
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-269512 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-269512 -n scheduled-stop-269512
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-269512
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-269512 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1026 15:06:34.533267  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-269512
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-269512: exit status 7 (88.035004ms)
-- stdout --
	scheduled-stop-269512
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-269512 -n scheduled-stop-269512
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-269512 -n scheduled-stop-269512: exit status 7 (87.005267ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-269512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-269512
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-269512: (5.531769049s)
--- PASS: TestScheduledStopUnix (97.88s)

TestInsufficientStorage (9.86s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-263685 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-263685 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.28341293s)
-- stdout --
	{"specversion":"1.0","id":"ba44006b-77a7-4643-8139-3534b71aaed6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-263685] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a245722b-08e5-454b-b7c1-6261d3fe4dac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21664"}}
	{"specversion":"1.0","id":"10f23199-aca2-4cf3-9b03-b61538598ed0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e910da1d-715f-4733-bfc1-35fdf53d1671","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig"}}
	{"specversion":"1.0","id":"d353350c-6796-472b-8380-c42d0db00009","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube"}}
	{"specversion":"1.0","id":"701e0c06-c46f-41c8-8940-5923cf82cd79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f14a3592-78ba-4ca0-ba3f-c39d935548f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c46975fa-aca1-4212-9af5-3b571f13debc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"7071953c-d50d-4302-8355-74511d454d2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"171a72de-7c59-4038-b0ef-0b98004c2eb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c02eaf77-40a8-47e5-9889-3935828abd37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"eb4f288b-8d7f-4e69-b437-0b8348546a5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-263685\" primary control-plane node in \"insufficient-storage-263685\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5aa8a45e-33d6-4925-9eac-e417c03e9331","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8b3abaf7-b95e-4cf0-a674-6ae324257c98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f9c3fa7b-ba6f-447f-ad7e-5f0ccc219c05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
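Each line of the `--output=json` stream above is one CloudEvent. A short Go sketch for consuming such a stream, assuming only the fields visible in the log (the struct is hand-rolled for illustration, not minikube's own type; every value in "data" is a string in this output, so map[string]string suffices):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event models just the CloudEvent fields used in the stdout above.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. minikube start ... --output=json | thisprogram
		sc.Buffer(make([]byte, 1<<20), 1<<20) // error events can be long
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // tolerate non-JSON lines
			}
			if e.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("%s (exit %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
			}
		}
	}

Run against the stdout above, this would surface the RSRC_DOCKER_STORAGE event with exit code 26.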
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-263685 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-263685 --output=json --layout=cluster: exit status 7 (306.614838ms)
-- stdout --
	{"Name":"insufficient-storage-263685","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-263685","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1026 15:07:07.515928 1019634 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-263685" does not appear in /home/jenkins/minikube-integration/21664-841519/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-263685 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-263685 --output=json --layout=cluster: exit status 7 (299.653866ms)
-- stdout --
	{"Name":"insufficient-storage-263685","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-263685","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1026 15:07:07.816953 1019746 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-263685" does not appear in /home/jenkins/minikube-integration/21664-841519/kubeconfig
	E1026 15:07:07.827665 1019746 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/insufficient-storage-263685/events.json: no such file or directory
** /stderr **
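Both status calls exit 7 but still emit a structured layout, with status codes that mirror HTTP: 507 InsufficientStorage for the cluster and node, 405 and 500 for stopped or errored components. A sketch of decoding it (the struct is assumed from the JSON above, not taken from minikube's source):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type node struct {
		Name       string
		StatusCode int
		StatusName string
	}

	type clusterStatus struct {
		Name       string
		StatusCode int
		StatusName string
		Nodes      []node
	}

	func main() {
		raw := []byte(`{"Name":"insufficient-storage-263685","StatusCode":507,"StatusName":"InsufficientStorage","Nodes":[{"Name":"insufficient-storage-263685","StatusCode":507,"StatusName":"InsufficientStorage"}]}`)
		var st clusterStatus
		if err := json.Unmarshal(raw, &st); err != nil {
			panic(err)
		}
		for _, n := range st.Nodes {
			fmt.Printf("node %s: %d %s\n", n.Name, n.StatusCode, n.StatusName)
		}
	}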
helpers_test.go:175: Cleaning up "insufficient-storage-263685" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-263685
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-263685: (1.968040936s)
--- PASS: TestInsufficientStorage (9.86s)

TestRunningBinaryUpgrade (53.69s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1145414482 start -p running-upgrade-917646 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1026 15:08:25.311479  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1145414482 start -p running-upgrade-917646 --memory=3072 --vm-driver=docker  --container-runtime=crio: (24.543098839s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-917646 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-917646 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.114425751s)
helpers_test.go:175: Cleaning up "running-upgrade-917646" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-917646
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-917646: (2.556185417s)
--- PASS: TestRunningBinaryUpgrade (53.69s)

TestKubernetesUpgrade (312.86s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-176599 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-176599 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.552460422s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-176599
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-176599: (2.019590232s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-176599 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-176599 status --format={{.Host}}: exit status 7 (107.699222ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-176599 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-176599 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m23.873586561s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-176599 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-176599 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-176599 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (85.441208ms)
-- stdout --
	* [kubernetes-upgrade-176599] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-176599
	    minikube start -p kubernetes-upgrade-176599 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1765992 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-176599 --kubernetes-version=v1.34.1
	    
** /stderr **
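The refusal is a plain version comparison: the requested v1.28.0 sorts below the cluster's existing v1.34.1, so minikube exits 106 with K8S_DOWNGRADE_UNSUPPORTED instead of attempting an in-place downgrade. A sketch of that guard using golang.org/x/mod/semver (the function name and error text are illustrative, not minikube's code):

	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	// checkDowngrade rejects a requested version older than the running one.
	func checkDowngrade(existing, requested string) error {
		if semver.Compare(requested, existing) < 0 {
			return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
		}
		return nil
	}

	func main() {
		if err := checkDowngrade("v1.34.1", "v1.28.0"); err != nil {
			fmt.Println("refusing:", err) // the CLI surfaces this as exit status 106
		}
	}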
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-176599 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-176599 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.632973314s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-176599" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-176599
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-176599: (2.530917526s)
--- PASS: TestKubernetesUpgrade (312.86s)

TestMissingContainerUpgrade (93.16s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2219417753 start -p missing-upgrade-374022 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2219417753 start -p missing-upgrade-374022 --memory=3072 --driver=docker  --container-runtime=crio: (41.47969214s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-374022
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-374022: (10.473097884s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-374022
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-374022 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-374022 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.075338418s)
helpers_test.go:175: Cleaning up "missing-upgrade-374022" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-374022
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-374022: (2.507726195s)
--- PASS: TestMissingContainerUpgrade (93.16s)

TestPause/serial/Start (85.74s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-212674 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-212674 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m25.73881466s)
--- PASS: TestPause/serial/Start (85.74s)

TestPause/serial/SecondStartNoReconfiguration (6.51s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-212674 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-212674 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.496440991s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.51s)

TestStoppedBinaryUpgrade/Setup (0.55s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

TestStoppedBinaryUpgrade/Upgrade (44.7s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2302758742 start -p stopped-upgrade-886432 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2302758742 start -p stopped-upgrade-886432 --memory=3072 --vm-driver=docker  --container-runtime=crio: (24.358030071s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2302758742 -p stopped-upgrade-886432 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2302758742 -p stopped-upgrade-886432 stop: (4.663978226s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-886432 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-886432 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (15.675727396s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (44.70s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-917490 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-917490 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (90.813285ms)
-- stdout --
	* [NoKubernetes-917490] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
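Exit status 14 (MK_USAGE) is produced by flag validation before any cluster work begins: --no-kubernetes and --kubernetes-version contradict each other. A minimal sketch of that style of check with Go's standard flag package (the flag names mirror the CLI; the program itself is illustrative):

	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		version := flag.String("kubernetes-version", "", "Kubernetes version to use")
		flag.Parse()

		// Fail fast on the contradictory combination, as in the MK_USAGE exit above.
		if *noK8s && *version != "" {
			fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14)
		}
	}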
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (24.63s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-917490 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-917490 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.262669192s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-917490 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (24.63s)

TestNetworkPlugins/group/false (4.53s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-498531 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-498531 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (203.22037ms)
-- stdout --
	* [false-498531] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1026 15:09:21.867367 1054516 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:09:21.867646 1054516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:09:21.867656 1054516 out.go:374] Setting ErrFile to fd 2...
	I1026 15:09:21.867660 1054516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:09:21.867852 1054516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-841519/.minikube/bin
	I1026 15:09:21.868356 1054516 out.go:368] Setting JSON to false
	I1026 15:09:21.869547 1054516 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10310,"bootTime":1761481052,"procs":285,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:09:21.869658 1054516 start.go:141] virtualization: kvm guest
	I1026 15:09:21.872326 1054516 out.go:179] * [false-498531] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:09:21.874115 1054516 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:09:21.874142 1054516 notify.go:220] Checking for updates...
	I1026 15:09:21.876752 1054516 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:09:21.878582 1054516 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-841519/kubeconfig
	I1026 15:09:21.879745 1054516 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-841519/.minikube
	I1026 15:09:21.880852 1054516 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:09:21.882043 1054516 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:09:21.883776 1054516 config.go:182] Loaded profile config "NoKubernetes-917490": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:09:21.883877 1054516 config.go:182] Loaded profile config "kubernetes-upgrade-176599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:09:21.883957 1054516 config.go:182] Loaded profile config "stopped-upgrade-886432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 15:09:21.884048 1054516 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:09:21.915476 1054516 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 15:09:21.915660 1054516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 15:09:21.992213 1054516 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 15:09:21.981131497 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 15:09:21.992368 1054516 docker.go:318] overlay module found
	I1026 15:09:21.994618 1054516 out.go:179] * Using the docker driver based on user configuration
	I1026 15:09:21.995981 1054516 start.go:305] selected driver: docker
	I1026 15:09:21.995996 1054516 start.go:925] validating driver "docker" against <nil>
	I1026 15:09:21.996008 1054516 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:09:21.997817 1054516 out.go:203] 
	W1026 15:09:21.999014 1054516 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1026 15:09:22.000133 1054516 out.go:203] 
** /stderr **
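The start is rejected during driver validation: CRI-O brings no built-in pod networking, so --cni=false would leave nothing to wire pods together, and minikube bails out with MK_USAGE before creating anything. Sketched as a standalone check (minikube's real validation lives in its start path; this is only an illustration of the rule the log states):

	package main

	import (
		"errors"
		"fmt"
	)

	// validateCNI enforces the rule from the error above: the crio
	// runtime cannot run with CNI explicitly disabled.
	func validateCNI(runtime, cni string) error {
		if runtime == "crio" && cni == "false" {
			return errors.New(`the "crio" container runtime requires CNI`)
		}
		return nil
	}

	func main() {
		if err := validateCNI("crio", "false"); err != nil {
			fmt.Println("X Exiting due to MK_USAGE:", err)
		}
	}

The debug logs that follow all fail the same way because the profile was never created.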
net_test.go:88: 
----------------------- debugLogs start: false-498531 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-498531

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-498531

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-498531

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-498531

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-498531

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-498531

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-498531

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-498531

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-498531

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-498531

>>> host: /etc/nsswitch.conf:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: /etc/hosts:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: /etc/resolv.conf:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-498531

>>> host: crictl pods:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: crictl containers:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> k8s: describe netcat deployment:
error: context "false-498531" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-498531" does not exist

>>> k8s: netcat logs:
error: context "false-498531" does not exist

>>> k8s: describe coredns deployment:
error: context "false-498531" does not exist

>>> k8s: describe coredns pods:
error: context "false-498531" does not exist

>>> k8s: coredns logs:
error: context "false-498531" does not exist

>>> k8s: describe api server pod(s):
error: context "false-498531" does not exist

>>> k8s: api server logs:
error: context "false-498531" does not exist

>>> host: /etc/cni:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: ip a s:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: ip r s:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: iptables-save:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: iptables table nat:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> k8s: describe kube-proxy daemon set:
error: context "false-498531" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-498531" does not exist

>>> k8s: kube-proxy logs:
error: context "false-498531" does not exist

>>> host: kubelet daemon status:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: kubelet daemon config:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> k8s: kubelet logs:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 15:08:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-176599
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 15:09:20 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-886432
contexts:
- context:
    cluster: kubernetes-upgrade-176599
    user: kubernetes-upgrade-176599
  name: kubernetes-upgrade-176599
- context:
    cluster: stopped-upgrade-886432
    user: stopped-upgrade-886432
  name: stopped-upgrade-886432
current-context: stopped-upgrade-886432
kind: Config
users:
- name: kubernetes-upgrade-176599
  user:
    client-certificate: /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/kubernetes-upgrade-176599/client.crt
    client-key: /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/kubernetes-upgrade-176599/client.key
- name: stopped-upgrade-886432
  user:
    client-certificate: /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/stopped-upgrade-886432/client.crt
    client-key: /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/stopped-upgrade-886432/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-498531

>>> host: docker daemon status:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: docker daemon config:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: /etc/docker/daemon.json:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: docker system info:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: cri-docker daemon status:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: cri-docker daemon config:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: cri-dockerd version:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: containerd daemon status:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: containerd daemon config:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: /etc/containerd/config.toml:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: containerd config dump:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: crio daemon status:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: crio daemon config:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: /etc/crio:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

>>> host: crio config:
* Profile "false-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-498531"

----------------------- debugLogs end: false-498531 [took: 4.081652137s] --------------------------------
helpers_test.go:175: Cleaning up "false-498531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-498531
--- PASS: TestNetworkPlugins/group/false (4.53s)
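Note: the repeated "Profile not found" responses above come from the debug-log collector running against a profile that had already been torn down. A minimal guard for such a collector, sketched under the assumption that plain `minikube profile list` output contains the profile name (as this log shows); `minikube` stands in for out/minikube-linux-amd64:

# Hypothetical sketch: only gather logs if the profile still exists.
PROFILE="false-498531"
if minikube profile list 2>/dev/null | grep -q "$PROFILE"; then
  minikube logs -p "$PROFILE"
else
  echo "profile $PROFILE gone; skipping debug collection" >&2
fi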

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-886432
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-886432: (1.113579498s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.69s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-917490 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-917490 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (15.192491572s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-917490 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-917490 status -o json: exit status 2 (335.646044ms)

-- stdout --
	{"Name":"NoKubernetes-917490","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-917490
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-917490: (2.15771753s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.69s)
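Note: `status -o json` exits 2 here even though it prints valid JSON, because minikube encodes component state in the exit code (host Running, kubelet and apiserver Stopped). A minimal sketch of consuming both, assuming the JSON shape printed above:

rc=0
# Capture output first; a non-zero exit is expected when components are stopped.
STATUS_JSON=$(minikube -p NoKubernetes-917490 status -o json) || rc=$?
echo "status exit code: $rc"                      # 2 in the run above
echo "$STATUS_JSON" | grep -o '"Kubelet":"[^"]*"' # "Kubelet":"Stopped"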

                                                
                                    
TestNoKubernetes/serial/Start (5.86s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-917490 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-917490 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.855294998s)
--- PASS: TestNoKubernetes/serial/Start (5.86s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-917490 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-917490 "sudo systemctl is-active --quiet service kubelet": exit status 1 (326.355133ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)
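Note: the check leans on `systemctl is-active` semantics: exit 0 means active, non-zero means not (the inner status 3, "inactive", surfaces as the ssh exit status). A minimal re-run of the same probe, assuming ssh access to the node as in the test:

# Succeeds only if kubelet is active; --quiet suppresses the state string.
if minikube ssh -p NoKubernetes-917490 "sudo systemctl is-active --quiet service kubelet"; then
  echo "kubelet running (unexpected for a --no-kubernetes profile)"
else
  echo "kubelet not running, as the test expects"
fi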

                                                
                                    
TestNoKubernetes/serial/ProfileList (14.62s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (13.672398172s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (14.62s)
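Note: the table form of `profile list` took 13.7s because it probes each cluster's status; the JSON form carries the same data. A minimal sketch of extracting profile names from it, assuming `jq` is available (it is not part of this test environment) and that valid profiles sit under a `valid` array as in current minikube releases:

# jq availability and the .valid[].Name shape are assumptions here.
minikube profile list --output=json | jq -r '.valid[].Name'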

                                                
                                    
TestNoKubernetes/serial/Stop (1.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-917490
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-917490: (1.312770143s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.57s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-917490 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-917490 --driver=docker  --container-runtime=crio: (6.564989558s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.57s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-917490 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-917490 "sudo systemctl is-active --quiet service kubelet": exit status 1 (315.218567ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (51.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-330914 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-330914 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.696823106s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (51.70s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (50.09s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-475081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-475081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.090573713s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (50.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (7.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-330914 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fe9e7662-687b-457e-a57c-49441e024bbe] Pending
helpers_test.go:352: "busybox" [fe9e7662-687b-457e-a57c-49441e024bbe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fe9e7662-687b-457e-a57c-49441e024bbe] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.003098967s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-330914 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.26s)
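Note: the deploy step is reproducible outside the harness: create the pod, wait on its label, then read the file-descriptor limit the container inherited. A minimal sketch, assuming the same context name and a manifest equivalent to testdata/busybox.yaml:

kubectl --context old-k8s-version-330914 create -f testdata/busybox.yaml
# Rough equivalent of the harness's 8m0s label wait:
kubectl --context old-k8s-version-330914 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
# The assertion target: the container's open-file limit.
kubectl --context old-k8s-version-330914 exec busybox -- /bin/sh -c "ulimit -n"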

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (7.23s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-475081 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fa6c47a1-6c0a-41c3-a288-0ec79f76a4ba] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fa6c47a1-6c0a-41c3-a288-0ec79f76a4ba] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.003838331s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-475081 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (16.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-330914 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-330914 --alsologtostderr -v=3: (16.161101286s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (16.26s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-475081 --alsologtostderr -v=3
E1026 15:11:34.525962  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-475081 --alsologtostderr -v=3: (16.259528376s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330914 -n old-k8s-version-330914
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330914 -n old-k8s-version-330914: exit status 7 (87.940236ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-330914 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
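Note: exit status 7 from `minikube status` corresponds to the stopped host shown in stdout, which the test tolerates before enabling an addon offline; the addon setting is recorded in the profile and applied on the next start. A minimal sketch of the same sequence, treating the 7-means-Stopped mapping as read from this output:

rc=0
minikube status --format='{{.Host}}' -p old-k8s-version-330914 || rc=$?
if [ "$rc" -eq 7 ]; then   # host stopped (per the stdout above); still safe to configure addons
  minikube addons enable dashboard -p old-k8s-version-330914 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
fi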

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (47.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-330914 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-330914 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (47.140989455s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-330914 -n old-k8s-version-330914
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.49s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-475081 -n no-preload-475081
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-475081 -n no-preload-475081: exit status 7 (92.003193ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-475081 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (49.74s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-475081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-475081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.392590204s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-475081 -n no-preload-475081
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.74s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (40.33s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-535130 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-535130 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.32938312s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-bpdjl" [662c14a7-1a94-4d0c-b7e0-9c2d8eef8724] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003715404s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-bpdjl" [662c14a7-1a94-4d0c-b7e0-9c2d8eef8724] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00491351s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-330914 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-swr7t" [595ffcd6-6b3a-4c7d-9837-33ebbeb02505] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003370377s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-330914 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
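Note: the image check simply lists the runtime's images and flags anything outside the expected minikube set. A minimal manual equivalent, assuming table-mode `image list` prints one repo:tag per line:

# Surface images not from the usual Kubernetes registries (heuristic filter).
minikube -p old-k8s-version-330914 image list | grep -vE '^(registry\.k8s\.io|gcr\.io/k8s-minikube)/' || true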

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-swr7t" [595ffcd6-6b3a-4c7d-9837-33ebbeb02505] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004398597s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-475081 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-475081 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (47.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-790012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-790012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (47.136519435s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (47.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (28.66s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-450976 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-450976 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (28.657891208s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.66s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.26s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-535130 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3a83ca98-1247-4189-b60f-6902a250ac9c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3a83ca98-1247-4189-b60f-6902a250ac9c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003670034s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-535130 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.26s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (43.06s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-498531 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-498531 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (43.059174465s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (16.39s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-535130 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-535130 --alsologtostderr -v=3: (16.386516232s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (8s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-450976 --alsologtostderr -v=3
E1026 15:13:25.311792  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/functional-656017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-450976 --alsologtostderr -v=3: (7.998731817s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-535130 -n embed-certs-535130
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-535130 -n embed-certs-535130: exit status 7 (88.230652ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-535130 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (54.36s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-535130 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-535130 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.995348205s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-535130 -n embed-certs-535130
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (54.36s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-450976 -n newest-cni-450976
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-450976 -n newest-cni-450976: exit status 7 (83.649111ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-450976 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (11.7s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-450976 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-450976 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (11.342438631s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-450976 -n newest-cni-450976
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.70s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-790012 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e7ac2f4d-99cd-4d92-9325-8c4c3b4aeac4] Pending
helpers_test.go:352: "busybox" [e7ac2f4d-99cd-4d92-9325-8c4c3b4aeac4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e7ac2f4d-99cd-4d92-9325-8c4c3b4aeac4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.004414862s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-790012 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (17.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-790012 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-790012 --alsologtostderr -v=3: (17.177611668s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (17.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-450976 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-498531 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-498531 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-46wxg" [2fcd707d-7808-4785-9b2d-8075c0a050fb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-46wxg" [2fcd707d-7808-4785-9b2d-8075c0a050fb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.00426846s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (40.25s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-498531 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-498531 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (40.253263241s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (40.25s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-498531 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-498531 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-498531 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)
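Note: DNS, Localhost, and HairPin together verify cluster DNS resolution, pod-to-localhost reachability, and a pod reaching itself through its own Service. A minimal re-run of the three probes against the netcat deployment, assuming it is still deployed:

# DNS: resolve the API server's Service name through cluster DNS.
kubectl --context auto-498531 exec deployment/netcat -- nslookup kubernetes.default
# Localhost: connect to the port the pod serves on 127.0.0.1.
kubectl --context auto-498531 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# HairPin: connect back to itself via the netcat Service name.
kubectl --context auto-498531 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"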

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-790012 -n default-k8s-diff-port-790012
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-790012 -n default-k8s-diff-port-790012: exit status 7 (90.473783ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-790012 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.74s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-790012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-790012 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.377431634s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-790012 -n default-k8s-diff-port-790012
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.74s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (50.56s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-498531 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-498531 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (50.556324521s)
--- PASS: TestNetworkPlugins/group/calico/Start (50.56s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8p6g2" [0a6afa02-36ae-4637-8893-3f91d7a0fa0e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003455908s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8p6g2" [0a6afa02-36ae-4637-8893-3f91d7a0fa0e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003942584s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-535130 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-6d577" [b1fd8564-b7cf-40f1-90f7-17a0ea8fd227] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004536167s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
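Note: waiting for the CNI controller pod is an ordinary label wait in kube-system. A minimal sketch using the label the test polls:

kubectl --context kindnet-498531 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m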

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-535130 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-498531 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.32s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-498531 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lwzrf" [e9f3ddde-64e5-4af3-ad9a-e6a24b931f99] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lwzrf" [e9f3ddde-64e5-4af3-ad9a-e6a24b931f99] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.072724178s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (54.85s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-498531 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-498531 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (54.853956762s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.85s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-498531 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-498531 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-498531 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pj966" [3c881e80-fe95-4d71-aff2-be956290436b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003430449s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pj966" [3c881e80-fe95-4d71-aff2-be956290436b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004279921s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-790012 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-790012 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-lhs66" [c64a8af7-63d1-46b9-9ba7-660c207aa610] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005361391s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (70.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-498531 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-498531 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m10.390350869s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.39s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (53.16s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-498531 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-498531 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (53.156279611s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-498531 "pgrep -a kubelet"
I1026 15:15:18.939644  845095 config.go:182] Loaded profile config "calico-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)
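Note: KubeletFlags asserts against the single `pgrep -a kubelet` line, which carries the kubelet's full command line. A hedged sketch of pulling individual --key=value flags out of such a line; the sample line below is invented for illustration:

package main

import (
	"fmt"
	"strings"
)

// kubeletFlags splits a `pgrep -a kubelet` line into a map of --key=value flags.
func kubeletFlags(line string) map[string]string {
	flags := map[string]string{}
	for _, field := range strings.Fields(line) {
		if !strings.HasPrefix(field, "--") {
			continue // skip the PID and the binary path
		}
		kv := strings.SplitN(strings.TrimPrefix(field, "--"), "=", 2)
		if len(kv) == 2 {
			flags[kv[0]] = kv[1]
		} else {
			flags[kv[0]] = ""
		}
	}
	return flags
}

func main() {
	// Invented example; a real line comes from `minikube ssh -p <profile> "pgrep -a kubelet"`.
	line := `1234 /var/lib/minikube/binaries/v1.34.1/kubelet --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=calico-498531`
	fmt.Println(kubeletFlags(line)["container-runtime-endpoint"])
}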

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.09s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-498531 replace --force -f testdata/netcat-deployment.yaml
I1026 15:15:19.570817  845095 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1026 15:15:19.830634  845095 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4j52s" [980c3be4-dbb1-41cd-b77d-0608307043f4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4j52s" [980c3be4-dbb1-41cd-b77d-0608307043f4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004621263s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.09s)
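Note: the kapi.go lines above show the stabilize loop at work: a Deployment only counts as settled once status.observedGeneration catches up with metadata.generation and the replica counts match the spec. A small self-contained sketch of that condition (assumed logic, not the harness code):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

// deploymentStable reports whether the controller has observed the latest spec
// and all requested replicas are updated, ready, and available.
func deploymentStable(d *appsv1.Deployment) bool {
	if d.Status.ObservedGeneration < d.Generation {
		return false // controller has not seen the newest generation yet
	}
	want := int32(1)
	if d.Spec.Replicas != nil {
		want = *d.Spec.Replicas
	}
	return d.Status.UpdatedReplicas == want &&
		d.Status.ReadyReplicas == want &&
		d.Status.AvailableReplicas == want
}

func main() {
	var d appsv1.Deployment
	d.Generation = 1
	d.Status.ObservedGeneration = 0  // the state in the first kapi.go log line above
	fmt.Println(deploymentStable(&d)) // false until the controller catches up
}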

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-498531 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-498531 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-498531 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)
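Note: the last three checks per CNI probe different paths from inside the netcat pod: DNS resolves a cluster service name through CoreDNS, Localhost hits the netcat server on the pod's own loopback, and HairPin dials the pod back through its own service name, which only succeeds when the CNI handles hairpin traffic. The same three probes, sketched as the kubectl invocations logged above (context name taken from this run; the helper is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// probe runs a command inside the netcat deployment via kubectl exec.
func probe(kubeContext string, cmd ...string) error {
	args := append([]string{"--context", kubeContext, "exec", "deployment/netcat", "--"}, cmd...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	ctx := "calico-498531"
	probe(ctx, "nslookup", "kubernetes.default")                  // DNS
	probe(ctx, "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080") // Localhost
	probe(ctx, "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")    // HairPin
}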

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-498531 "pgrep -a kubelet"
I1026 15:15:41.547246  845095 config.go:182] Loaded profile config "custom-flannel-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-498531 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xmgzr" [b65b7ebb-80e6-4ef7-ac80-ee0e7ffe00c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xmgzr" [b65b7ebb-80e6-4ef7-ac80-ee0e7ffe00c2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004262007s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-498531 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-498531 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-498531 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

TestNetworkPlugins/group/bridge/Start (66.6s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-498531 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-498531 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m6.5957451s)
--- PASS: TestNetworkPlugins/group/bridge/Start (66.60s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-qr9mw" [c21f27e7-6c46-4d4f-82c4-e360beea9dbc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003225119s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-498531 "pgrep -a kubelet"
E1026 15:16:14.514956  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1026 15:16:14.632669  845095 config.go:182] Loaded profile config "flannel-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-498531 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-j4w97" [402e78a2-661a-4ebb-87c2-44ee291f709c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1026 15:16:17.076977  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:16:17.158537  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:16:17.164959  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:16:17.176345  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:16:17.197822  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:16:17.239336  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:16:17.320916  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:16:17.482424  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:16:17.596953  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:16:17.804508  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-j4w97" [402e78a2-661a-4ebb-87c2-44ee291f709c] Running
E1026 15:16:18.445863  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:16:19.727317  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:16:22.198385  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:16:22.289026  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003658952s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

TestNetworkPlugins/group/flannel/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-498531 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

TestNetworkPlugins/group/flannel/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-498531 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

TestNetworkPlugins/group/flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-498531 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-498531 "pgrep -a kubelet"
I1026 15:16:25.442269  845095 config.go:182] Loaded profile config "enable-default-cni-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-498531 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-l4tw4" [c9b1feae-4bab-4ec6-bfce-db3b8aa4e5f6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1026 15:16:27.411345  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/no-preload-475081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-l4tw4" [c9b1feae-4bab-4ec6-bfce-db3b8aa4e5f6] Running
E1026 15:16:32.440291  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/old-k8s-version-330914/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:16:34.525761  845095 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/addons-459729/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003538092s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-498531 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-498531 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-498531 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-498531 "pgrep -a kubelet"
I1026 15:17:00.615381  845095 config.go:182] Loaded profile config "bridge-498531": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-498531 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sgjpr" [dbfd2e92-c0cd-444f-b464-c2d5bdad4c8f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-sgjpr" [dbfd2e92-c0cd-444f-b464-c2d5bdad4c8f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003322739s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.18s)

TestNetworkPlugins/group/bridge/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-498531 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

TestNetworkPlugins/group/bridge/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-498531 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

TestNetworkPlugins/group/bridge/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-498531 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

Test skip (26/326)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-619402" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-619402
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

TestNetworkPlugins/group/kubenet (3.85s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-498531 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-498531

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-498531

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-498531

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-498531

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-498531

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-498531

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-498531

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-498531

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-498531

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-498531

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: /etc/hosts:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: /etc/resolv.conf:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-498531

>>> host: crictl pods:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: crictl containers:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> k8s: describe netcat deployment:
error: context "kubenet-498531" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-498531" does not exist

>>> k8s: netcat logs:
error: context "kubenet-498531" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-498531" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-498531" does not exist

>>> k8s: coredns logs:
error: context "kubenet-498531" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-498531" does not exist

>>> k8s: api server logs:
error: context "kubenet-498531" does not exist

>>> host: /etc/cni:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: ip a s:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: ip r s:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: iptables-save:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: iptables table nat:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-498531" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-498531" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-498531" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: kubelet daemon config:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> k8s: kubelet logs:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 15:08:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-176599
contexts:
- context:
    cluster: kubernetes-upgrade-176599
    user: kubernetes-upgrade-176599
  name: kubernetes-upgrade-176599
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-176599
  user:
    client-certificate: /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/kubernetes-upgrade-176599/client.crt
    client-key: /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/kubernetes-upgrade-176599/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-498531

>>> host: docker daemon status:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: docker daemon config:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: docker system info:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: cri-docker daemon status:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: cri-docker daemon config:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: cri-dockerd version:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: containerd daemon status:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: containerd daemon config:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: containerd config dump:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: crio daemon status:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: crio daemon config:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: /etc/crio:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"

>>> host: crio config:
* Profile "kubenet-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-498531"
----------------------- debugLogs end: kubenet-498531 [took: 3.662348821s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-498531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-498531
--- SKIP: TestNetworkPlugins/group/kubenet (3.85s)

TestNetworkPlugins/group/cilium (4.66s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-498531 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-498531

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-498531

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-498531

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-498531

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-498531

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-498531

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-498531

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-498531

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-498531

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-498531

>>> host: /etc/nsswitch.conf:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: /etc/hosts:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: /etc/resolv.conf:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-498531

>>> host: crictl pods:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: crictl containers:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> k8s: describe netcat deployment:
error: context "cilium-498531" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-498531" does not exist

>>> k8s: netcat logs:
error: context "cilium-498531" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-498531" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-498531" does not exist

>>> k8s: coredns logs:
error: context "cilium-498531" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-498531" does not exist

>>> k8s: api server logs:
error: context "cilium-498531" does not exist

>>> host: /etc/cni:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: ip a s:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: ip r s:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"
                                                
>>> host: iptables-save:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: iptables table nat:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-498531

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-498531

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-498531" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-498531" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-498531

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-498531

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-498531" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-498531" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-498531" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-498531" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-498531" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: kubelet daemon config:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> k8s: kubelet logs:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 15:08:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-176599
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21664-841519/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 15:09:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-886432
contexts:
- context:
    cluster: kubernetes-upgrade-176599
    user: kubernetes-upgrade-176599
  name: kubernetes-upgrade-176599
- context:
    cluster: stopped-upgrade-886432
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 15:09:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: stopped-upgrade-886432
  name: stopped-upgrade-886432
current-context: stopped-upgrade-886432
kind: Config
users:
- name: kubernetes-upgrade-176599
  user:
    client-certificate: /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/kubernetes-upgrade-176599/client.crt
    client-key: /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/kubernetes-upgrade-176599/client.key
- name: stopped-upgrade-886432
  user:
    client-certificate: /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/stopped-upgrade-886432/client.crt
    client-key: /home/jenkins/minikube-integration/21664-841519/.minikube/profiles/stopped-upgrade-886432/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-498531

>>> host: docker daemon status:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: docker daemon config:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: docker system info:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: cri-docker daemon status:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: cri-docker daemon config:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: cri-dockerd version:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: containerd daemon status:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: containerd daemon config:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: containerd config dump:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: crio daemon status:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: crio daemon config:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: /etc/crio:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

>>> host: crio config:
* Profile "cilium-498531" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-498531"

----------------------- debugLogs end: cilium-498531 [took: 4.455713886s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-498531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-498531
--- SKIP: TestNetworkPlugins/group/cilium (4.66s)
